00:00:00.001 Started by upstream project "autotest-per-patch" build number 132553 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.137 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.137 The recommended git tool is: git 00:00:00.138 using credential 00000000-0000-0000-0000-000000000002 00:00:00.140 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.173 Fetching changes from the remote Git repository 00:00:00.176 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.266 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.266 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.091 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.105 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.119 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.119 > git config core.sparsecheckout # timeout=10 00:00:06.132 > git read-tree -mu HEAD # timeout=10 00:00:06.149 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.172 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.172 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.279 [Pipeline] Start of Pipeline 00:00:06.299 [Pipeline] library 00:00:06.302 Loading library shm_lib@master 00:00:06.302 Library shm_lib@master is cached. Copying from home. 00:00:06.323 [Pipeline] node 00:00:06.335 Running on WFP37 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.337 [Pipeline] { 00:00:06.349 [Pipeline] catchError 00:00:06.351 [Pipeline] { 00:00:06.366 [Pipeline] wrap 00:00:06.375 [Pipeline] { 00:00:06.383 [Pipeline] stage 00:00:06.384 [Pipeline] { (Prologue) 00:00:06.602 [Pipeline] sh 00:00:06.883 + logger -p user.info -t JENKINS-CI 00:00:06.900 [Pipeline] echo 00:00:06.902 Node: WFP37 00:00:06.909 [Pipeline] sh 00:00:07.207 [Pipeline] setCustomBuildProperty 00:00:07.218 [Pipeline] echo 00:00:07.220 Cleanup processes 00:00:07.223 [Pipeline] sh 00:00:07.503 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.503 3616074 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.514 [Pipeline] sh 00:00:07.798 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.798 ++ grep -v 'sudo pgrep' 00:00:07.798 ++ awk '{print $1}' 00:00:07.798 + sudo kill -9 00:00:07.798 + true
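The Cleanup step above hunts down and kills any SPDK processes left over from a previous run before the workspace is wiped. A minimal sketch of that idiom, reconstructed from the xtrace lines in this trace (the grep -v keeps the pipeline from matching, and therefore killing, its own pgrep invocation; the '|| true' stands in for the '+ true' guard that lets the stage pass when nothing was running):

    # Collect PIDs of leftover processes still running out of the old workspace.
    # pgrep -af matches against the full command line, so filter out our own pgrep.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with an empty PID list fails; tolerate that and move on.
    sudo kill -9 $pids || true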
00:00:07.811 [Pipeline] cleanWs 00:00:07.820 [WS-CLEANUP] Deleting project workspace... 00:00:07.820 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.826 [WS-CLEANUP] done 00:00:07.828 [Pipeline] setCustomBuildProperty 00:00:07.843 [Pipeline] sh 00:00:08.127 + sudo git config --global --replace-all safe.directory '*' 00:00:08.207 [Pipeline] httpRequest 00:00:08.721 [Pipeline] echo 00:00:08.723 Sorcerer 10.211.164.20 is alive 00:00:08.732 [Pipeline] retry 00:00:08.733 [Pipeline] { 00:00:08.747 [Pipeline] httpRequest 00:00:08.751 HttpMethod: GET 00:00:08.752 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.752 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.769 Response Code: HTTP/1.1 200 OK 00:00:08.769 Success: Status code 200 is in the accepted range: 200,404 00:00:08.769 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.746 [Pipeline] } 00:00:13.769 [Pipeline] // retry 00:00:13.779 [Pipeline] sh 00:00:14.064 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.081 [Pipeline] httpRequest 00:00:14.464 [Pipeline] echo 00:00:14.466 Sorcerer 10.211.164.20 is alive 00:00:14.479 [Pipeline] retry 00:00:14.481 [Pipeline] { 00:00:14.510 [Pipeline] httpRequest 00:00:14.515 HttpMethod: GET 00:00:14.515 URL: http://10.211.164.20/packages/spdk_fb1630bf795829450ae8d0e1f4674daf38daa608.tar.gz 00:00:14.516 Sending request to url: http://10.211.164.20/packages/spdk_fb1630bf795829450ae8d0e1f4674daf38daa608.tar.gz 00:00:14.533 Response Code: HTTP/1.1 200 OK 00:00:14.533 Success: Status code 200 is in the accepted range: 200,404 00:00:14.533 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_fb1630bf795829450ae8d0e1f4674daf38daa608.tar.gz 00:01:08.281 [Pipeline] } 00:01:08.302 [Pipeline] // retry 00:01:08.312 [Pipeline] sh 00:01:08.602 + tar --no-same-owner -xf spdk_fb1630bf795829450ae8d0e1f4674daf38daa608.tar.gz 00:01:11.149 [Pipeline] sh 00:01:11.434 + git -C spdk log --oneline -n5 00:01:11.434 fb1630bf7 bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:01:11.434 67afc973b bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:01:11.434 16e5e505a bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:01:11.434 20b346609 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:01:11.434 2a91567e4 CHANGELOG.md: corrected typo 00:01:11.445 [Pipeline] } 00:01:11.462 [Pipeline] // stage 00:01:11.471 [Pipeline] stage 00:01:11.473 [Pipeline] { (Prepare) 00:01:11.493 [Pipeline] writeFile 00:01:11.511 [Pipeline] sh 00:01:11.794 + logger -p user.info -t JENKINS-CI 00:01:11.808 [Pipeline] sh 00:01:12.097 + logger -p user.info -t JENKINS-CI 00:01:12.110 [Pipeline] sh 00:01:12.393 + cat autorun-spdk.conf 00:01:12.393 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.393 SPDK_TEST_NVMF=1 00:01:12.393 SPDK_TEST_NVME_CLI=1 00:01:12.393 SPDK_TEST_NVMF_NICS=mlx5 00:01:12.393 SPDK_RUN_UBSAN=1 00:01:12.393 NET_TYPE=phy 00:01:12.399 RUN_NIGHTLY=0 00:01:12.404 [Pipeline] readFile 00:01:12.426 [Pipeline] withEnv 00:01:12.428 [Pipeline] { 00:01:12.440 [Pipeline] sh 00:01:12.723 + set -ex 00:01:12.723 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:12.723 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:12.723 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:12.723 ++ SPDK_TEST_NVMF=1 00:01:12.723 ++ SPDK_TEST_NVME_CLI=1 00:01:12.723 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:12.723 ++ SPDK_RUN_UBSAN=1 00:01:12.723 ++ NET_TYPE=phy
00:01:12.723 ++ RUN_NIGHTLY=0 00:01:12.723 + case $SPDK_TEST_NVMF_NICS in 00:01:12.723 + DRIVERS=mlx5_ib 00:01:12.723 + [[ -n mlx5_ib ]] 00:01:12.723 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:12.723 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.296 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.296 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.296 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.296 + true 00:01:19.296 + for D in $DRIVERS 00:01:19.296 + sudo modprobe mlx5_ib 00:01:19.296 + exit 0 00:01:19.306 [Pipeline] } 00:01:19.324 [Pipeline] // withEnv 00:01:19.329 [Pipeline] } 00:01:19.348 [Pipeline] // stage
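In the withEnv block that closes above, SPDK_TEST_NVMF_NICS=mlx5 is mapped to DRIVERS=mlx5_ib, every RDMA module that could claim the test NICs is force-unloaded (the "not currently loaded" rmmod errors are expected and swallowed), and only the driver this run needs is loaded back. A minimal sketch of that reload idiom; the case statement is reconstructed from the single mlx5 branch visible in this trace, so any other NIC branches are omitted here:

    # Map the NIC type under test to the kernel module it needs.
    case $SPDK_TEST_NVMF_NICS in
      mlx5) DRIVERS=mlx5_ib ;;
    esac
    if [[ -n $DRIVERS ]]; then
      # Unload anything that could hold the RDMA devices first; rmmod
      # fails for modules that are not loaded, hence the '|| true' guard.
      sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
      for D in $DRIVERS; do
        sudo modprobe "$D"
      done
    fi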
00:01:19.360 [Pipeline] catchError 00:01:19.362 [Pipeline] { 00:01:19.374 [Pipeline] timeout 00:01:19.374 Timeout set to expire in 1 hr 0 min 00:01:19.376 [Pipeline] { 00:01:19.386 [Pipeline] stage 00:01:19.388 [Pipeline] { (Tests) 00:01:19.398 [Pipeline] sh 00:01:19.679 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.679 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.679 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:19.679 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:19.679 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:19.679 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.679 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:19.679 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.679 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:19.679 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:19.679 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:19.679 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:19.679 + source /etc/os-release 00:01:19.679 ++ NAME='Fedora Linux' 00:01:19.679 ++ VERSION='39 (Cloud Edition)' 00:01:19.679 ++ ID=fedora 00:01:19.679 ++ VERSION_ID=39 00:01:19.679 ++ VERSION_CODENAME= 00:01:19.679 ++ PLATFORM_ID=platform:f39 00:01:19.679 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:19.679 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:19.679 ++ LOGO=fedora-logo-icon 00:01:19.679 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:19.679 ++ HOME_URL=https://fedoraproject.org/ 00:01:19.679 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:19.679 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:19.679 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:19.679 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:19.679 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:19.679 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:19.679 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:19.679 ++ SUPPORT_END=2024-11-12 00:01:19.679 ++ VARIANT='Cloud Edition' 00:01:19.679 ++ VARIANT_ID=cloud 00:01:19.679 + uname -a 00:01:19.679 Linux spdk-wfp-37 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:19.679 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:22.215 Hugepages
00:01:22.215 node hugesize free / total
00:01:22.215 node0 1048576kB 0 / 0
00:01:22.215 node0 2048kB 0 / 0
00:01:22.215 node1 1048576kB 0 / 0
00:01:22.215 node1 2048kB 0 / 0
00:01:22.215
00:01:22.215 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:22.215 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:22.215 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:22.215 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:22.215 + rm -f /tmp/spdk-ld-path 00:01:22.215 + source autorun-spdk.conf 00:01:22.215 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.215 ++ SPDK_TEST_NVMF=1 00:01:22.215 ++ SPDK_TEST_NVME_CLI=1 00:01:22.215 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:22.215 ++ SPDK_RUN_UBSAN=1 00:01:22.215 ++ NET_TYPE=phy 00:01:22.215 ++ RUN_NIGHTLY=0 00:01:22.215 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.215 + [[ -n '' ]] 00:01:22.215 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:22.215 + for M in /var/spdk/build-*-manifest.txt 00:01:22.215 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.215 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.215 + for M in /var/spdk/build-*-manifest.txt 00:01:22.215 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.215 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.215 + for M in /var/spdk/build-*-manifest.txt 00:01:22.215 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.215 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:22.215 ++ uname 00:01:22.215 + [[ Linux == \L\i\n\u\x ]] 00:01:22.215 + sudo dmesg -T 00:01:22.215 + sudo dmesg --clear 00:01:22.215 + dmesg_pid=3617653 00:01:22.215 + [[ Fedora Linux == FreeBSD ]] 00:01:22.215 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.215 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.215 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.215 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.215 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.215 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.215 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.215 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.215 + sudo dmesg -Tw 00:01:22.215 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.215 + [[ !
-v VFIO_QEMU_BIN ]] 00:01:22.215 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.215 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.215 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.215 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.215 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.215 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.215 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.476 20:58:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.476 20:58:09 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_UBSAN=1 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ NET_TYPE=phy 00:01:22.476 20:58:09 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ RUN_NIGHTLY=0 00:01:22.476 20:58:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.476 20:58:09 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:22.476 20:58:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.476 20:58:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:22.476 20:58:09 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.476 20:58:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.476 20:58:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.476 20:58:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.476 20:58:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.476 20:58:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.476 20:58:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.476 20:58:09 -- paths/export.sh@5 -- $ export PATH 00:01:22.476 20:58:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.476 20:58:09 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:22.476 20:58:09 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:22.476 20:58:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732651089.XXXXXX 00:01:22.476 20:58:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732651089.vPWV6c 00:01:22.476 20:58:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:22.476 20:58:09 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:22.476 20:58:09 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:22.476 20:58:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.476 20:58:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.476 20:58:09 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:22.476 20:58:09 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.476 20:58:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.476 20:58:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:22.476 20:58:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:22.476 20:58:09 -- pm/common@17 -- $ local monitor 00:01:22.476 20:58:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.476 20:58:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.476 20:58:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.476 20:58:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.476 20:58:09 -- pm/common@25 -- $ sleep 1 00:01:22.476 20:58:09 -- pm/common@21 -- $ date +%s 00:01:22.476 20:58:09 -- pm/common@21 -- $ date +%s 00:01:22.476 20:58:09 -- pm/common@21 -- $ date +%s 00:01:22.476 20:58:09 -- pm/common@21 -- $ date +%s 00:01:22.476 20:58:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732651089 00:01:22.476 20:58:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732651089 00:01:22.476 20:58:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732651089 00:01:22.476 20:58:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732651089 00:01:22.476 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732651089_collect-cpu-load.pm.log 00:01:22.476 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732651089_collect-vmstat.pm.log 00:01:22.476 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732651089_collect-cpu-temp.pm.log 00:01:22.476 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732651089_collect-bmc-pm.bmc.pm.log 00:01:23.415 20:58:10 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:23.415 20:58:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.415 20:58:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.415 20:58:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:23.415 20:58:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.415 Tue Nov 26 07:58:10 PM UTC 2024 00:01:23.415 20:58:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.415 v25.01-pre-244-gfb1630bf7 00:01:23.415 20:58:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.415 20:58:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.415 20:58:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.415 20:58:10 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.415 20:58:10 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.415 20:58:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.415 ************************************ 00:01:23.415 START TEST ubsan 00:01:23.415 ************************************ 00:01:23.415 20:58:10 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.415 using ubsan 00:01:23.415 00:01:23.415 real 0m0.000s 00:01:23.415 user 0m0.000s 00:01:23.415 sys 0m0.000s 00:01:23.415 20:58:10 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.415 20:58:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.415 ************************************ 00:01:23.415 END TEST ubsan 00:01:23.415 ************************************
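The START TEST / END TEST banners and the real/user/sys timing above come from SPDK's run_test helper (the common/autotest_common.sh frames visible in the xtrace), which wraps each test step so every section of the log is delimited and timed the same way. A minimal sketch of the pattern, not the exact SPDK implementation:

    # run_test NAME CMD...: banner the step, run and time it, keep its status.
    run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"            # emits the real/user/sys lines seen above
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
    }

Invocation mirrors the trace above, e.g. run_test ubsan echo 'using ubsan'.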
00:01:23.674 20:58:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.674 20:58:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.674 20:58:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.674 20:58:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:23.674 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:23.674 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:23.933 Using 'verbs' RDMA provider 00:01:36.808 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:49.025 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:49.025 Creating mk/config.mk...done. 00:01:49.025 Creating mk/cc.flags.mk...done. 00:01:49.025 Type 'make' to build. 00:01:49.025 20:58:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:01:49.025 20:58:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:49.025 20:58:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:49.025 20:58:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:49.025 ************************************ 00:01:49.025 START TEST make 00:01:49.025 ************************************ 00:01:49.025 20:58:35 make -- common/autotest_common.sh@1129 -- $ make -j112 00:01:49.026 make[1]: Nothing to be done for 'all'. 00:01:55.586 The Meson build system 00:01:55.586 Version: 1.5.0 00:01:55.586 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:01:55.586 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:01:55.586 Build type: native build 00:01:55.586 Program cat found: YES (/usr/bin/cat) 00:01:55.586 Project name: DPDK 00:01:55.586 Project version: 24.03.0 00:01:55.586 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:55.586 C linker for the host machine: cc ld.bfd 2.40-14 00:01:55.586 Host machine cpu family: x86_64 00:01:55.586 Host machine cpu: x86_64 00:01:55.586 Message: ## Building in Developer Mode ## 00:01:55.586 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:55.586 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:55.586 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:55.586 Program python3 found: YES (/usr/bin/python3) 00:01:55.586 Program cat found: YES (/usr/bin/cat) 00:01:55.586 Compiler for C supports arguments -march=native: YES 00:01:55.586 Checking for size of "void *" : 8 00:01:55.586 Checking for size of "void *" : 8 (cached) 00:01:55.586 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:55.586 Library m found: YES 00:01:55.586 Library numa found: YES 00:01:55.586 Has header "numaif.h" : YES 00:01:55.586 Library fdt found: NO 00:01:55.586 Library execinfo found: NO 00:01:55.586 Has header "execinfo.h" : YES 00:01:55.586 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:55.586 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:55.586 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:55.586 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:55.586 Run-time dependency openssl found: YES 3.1.1 00:01:55.586 Run-time dependency libpcap found: YES 1.10.4 00:01:55.586 Has header "pcap.h" with dependency libpcap: YES 00:01:55.586 Compiler for C supports arguments -Wcast-qual: YES 00:01:55.586 Compiler for C
supports arguments -Wdeprecated: YES 00:01:55.586 Compiler for C supports arguments -Wformat: YES 00:01:55.586 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:55.586 Compiler for C supports arguments -Wformat-security: NO 00:01:55.586 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.586 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:55.586 Compiler for C supports arguments -Wnested-externs: YES 00:01:55.586 Compiler for C supports arguments -Wold-style-definition: YES 00:01:55.586 Compiler for C supports arguments -Wpointer-arith: YES 00:01:55.586 Compiler for C supports arguments -Wsign-compare: YES 00:01:55.586 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:55.586 Compiler for C supports arguments -Wundef: YES 00:01:55.586 Compiler for C supports arguments -Wwrite-strings: YES 00:01:55.586 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:55.586 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:55.586 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.586 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:55.586 Program objdump found: YES (/usr/bin/objdump) 00:01:55.586 Compiler for C supports arguments -mavx512f: YES 00:01:55.586 Checking if "AVX512 checking" compiles: YES 00:01:55.586 Fetching value of define "__SSE4_2__" : 1 00:01:55.586 Fetching value of define "__AES__" : 1 00:01:55.586 Fetching value of define "__AVX__" : 1 00:01:55.586 Fetching value of define "__AVX2__" : 1 00:01:55.586 Fetching value of define "__AVX512BW__" : 1 00:01:55.586 Fetching value of define "__AVX512CD__" : 1 00:01:55.586 Fetching value of define "__AVX512DQ__" : 1 00:01:55.586 Fetching value of define "__AVX512F__" : 1 00:01:55.586 Fetching value of define "__AVX512VL__" : 1 00:01:55.586 Fetching value of define "__PCLMUL__" : 1 00:01:55.586 Fetching value of define "__RDRND__" : 1 00:01:55.586 Fetching value of define "__RDSEED__" : 1 00:01:55.586 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:55.586 Fetching value of define "__znver1__" : (undefined) 00:01:55.586 Fetching value of define "__znver2__" : (undefined) 00:01:55.586 Fetching value of define "__znver3__" : (undefined) 00:01:55.586 Fetching value of define "__znver4__" : (undefined) 00:01:55.586 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:55.586 Message: lib/log: Defining dependency "log" 00:01:55.586 Message: lib/kvargs: Defining dependency "kvargs" 00:01:55.586 Message: lib/telemetry: Defining dependency "telemetry" 00:01:55.586 Checking for function "getentropy" : NO 00:01:55.586 Message: lib/eal: Defining dependency "eal" 00:01:55.586 Message: lib/ring: Defining dependency "ring" 00:01:55.586 Message: lib/rcu: Defining dependency "rcu" 00:01:55.586 Message: lib/mempool: Defining dependency "mempool" 00:01:55.586 Message: lib/mbuf: Defining dependency "mbuf" 00:01:55.586 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:55.586 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.586 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.586 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.586 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:55.586 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:55.586 Compiler for C supports arguments -mpclmul: YES 00:01:55.586 Compiler for C supports arguments -maes: YES 00:01:55.586 Compiler for C supports arguments -mavx512f: YES 
(cached) 00:01:55.586 Compiler for C supports arguments -mavx512bw: YES 00:01:55.586 Compiler for C supports arguments -mavx512dq: YES 00:01:55.586 Compiler for C supports arguments -mavx512vl: YES 00:01:55.586 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:55.586 Compiler for C supports arguments -mavx2: YES 00:01:55.586 Compiler for C supports arguments -mavx: YES 00:01:55.586 Message: lib/net: Defining dependency "net" 00:01:55.586 Message: lib/meter: Defining dependency "meter" 00:01:55.586 Message: lib/ethdev: Defining dependency "ethdev" 00:01:55.586 Message: lib/pci: Defining dependency "pci" 00:01:55.586 Message: lib/cmdline: Defining dependency "cmdline" 00:01:55.586 Message: lib/hash: Defining dependency "hash" 00:01:55.586 Message: lib/timer: Defining dependency "timer" 00:01:55.586 Message: lib/compressdev: Defining dependency "compressdev" 00:01:55.586 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:55.586 Message: lib/dmadev: Defining dependency "dmadev" 00:01:55.586 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:55.586 Message: lib/power: Defining dependency "power" 00:01:55.586 Message: lib/reorder: Defining dependency "reorder" 00:01:55.586 Message: lib/security: Defining dependency "security" 00:01:55.586 Has header "linux/userfaultfd.h" : YES 00:01:55.586 Has header "linux/vduse.h" : YES 00:01:55.586 Message: lib/vhost: Defining dependency "vhost" 00:01:55.586 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:55.586 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:55.586 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:55.586 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.586 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:55.586 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:55.586 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:55.586 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:55.586 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:55.586 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:55.586 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:55.586 Configuring doxy-api-html.conf using configuration 00:01:55.586 Configuring doxy-api-man.conf using configuration 00:01:55.586 Program mandb found: YES (/usr/bin/mandb) 00:01:55.586 Program sphinx-build found: NO 00:01:55.586 Configuring rte_build_config.h using configuration 00:01:55.586 Message: 00:01:55.586 ================= 00:01:55.586 Applications Enabled 00:01:55.586 ================= 00:01:55.586 00:01:55.586 apps: 00:01:55.586 00:01:55.586 00:01:55.586 Message: 00:01:55.586 ================= 00:01:55.586 Libraries Enabled 00:01:55.586 ================= 00:01:55.586 00:01:55.586 libs: 00:01:55.586 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:55.586 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:55.587 cryptodev, dmadev, power, reorder, security, vhost, 00:01:55.587 00:01:55.587 Message: 00:01:55.587 =============== 00:01:55.587 Drivers Enabled 00:01:55.587 =============== 00:01:55.587 00:01:55.587 common: 00:01:55.587 00:01:55.587 bus: 00:01:55.587 pci, vdev, 00:01:55.587 mempool: 00:01:55.587 ring, 00:01:55.587 dma: 00:01:55.587 00:01:55.587 net: 00:01:55.587 00:01:55.587 crypto: 00:01:55.587 00:01:55.587 compress: 00:01:55.587 00:01:55.587 vdpa: 00:01:55.587 
00:01:55.587 00:01:55.587 Message: 00:01:55.587 ================= 00:01:55.587 Content Skipped 00:01:55.587 ================= 00:01:55.587 00:01:55.587 apps: 00:01:55.587 dumpcap: explicitly disabled via build config 00:01:55.587 graph: explicitly disabled via build config 00:01:55.587 pdump: explicitly disabled via build config 00:01:55.587 proc-info: explicitly disabled via build config 00:01:55.587 test-acl: explicitly disabled via build config 00:01:55.587 test-bbdev: explicitly disabled via build config 00:01:55.587 test-cmdline: explicitly disabled via build config 00:01:55.587 test-compress-perf: explicitly disabled via build config 00:01:55.587 test-crypto-perf: explicitly disabled via build config 00:01:55.587 test-dma-perf: explicitly disabled via build config 00:01:55.587 test-eventdev: explicitly disabled via build config 00:01:55.587 test-fib: explicitly disabled via build config 00:01:55.587 test-flow-perf: explicitly disabled via build config 00:01:55.587 test-gpudev: explicitly disabled via build config 00:01:55.587 test-mldev: explicitly disabled via build config 00:01:55.587 test-pipeline: explicitly disabled via build config 00:01:55.587 test-pmd: explicitly disabled via build config 00:01:55.587 test-regex: explicitly disabled via build config 00:01:55.587 test-sad: explicitly disabled via build config 00:01:55.587 test-security-perf: explicitly disabled via build config 00:01:55.587 00:01:55.587 libs: 00:01:55.587 argparse: explicitly disabled via build config 00:01:55.587 metrics: explicitly disabled via build config 00:01:55.587 acl: explicitly disabled via build config 00:01:55.587 bbdev: explicitly disabled via build config 00:01:55.587 bitratestats: explicitly disabled via build config 00:01:55.587 bpf: explicitly disabled via build config 00:01:55.587 cfgfile: explicitly disabled via build config 00:01:55.587 distributor: explicitly disabled via build config 00:01:55.587 efd: explicitly disabled via build config 00:01:55.587 eventdev: explicitly disabled via build config 00:01:55.587 dispatcher: explicitly disabled via build config 00:01:55.587 gpudev: explicitly disabled via build config 00:01:55.587 gro: explicitly disabled via build config 00:01:55.587 gso: explicitly disabled via build config 00:01:55.587 ip_frag: explicitly disabled via build config 00:01:55.587 jobstats: explicitly disabled via build config 00:01:55.587 latencystats: explicitly disabled via build config 00:01:55.587 lpm: explicitly disabled via build config 00:01:55.587 member: explicitly disabled via build config 00:01:55.587 pcapng: explicitly disabled via build config 00:01:55.587 rawdev: explicitly disabled via build config 00:01:55.587 regexdev: explicitly disabled via build config 00:01:55.587 mldev: explicitly disabled via build config 00:01:55.587 rib: explicitly disabled via build config 00:01:55.587 sched: explicitly disabled via build config 00:01:55.587 stack: explicitly disabled via build config 00:01:55.587 ipsec: explicitly disabled via build config 00:01:55.587 pdcp: explicitly disabled via build config 00:01:55.587 fib: explicitly disabled via build config 00:01:55.587 port: explicitly disabled via build config 00:01:55.587 pdump: explicitly disabled via build config 00:01:55.587 table: explicitly disabled via build config 00:01:55.587 pipeline: explicitly disabled via build config 00:01:55.587 graph: explicitly disabled via build config 00:01:55.587 node: explicitly disabled via build config 00:01:55.587 00:01:55.587 drivers: 00:01:55.587 common/cpt: not in enabled drivers 
build config 00:01:55.587 common/dpaax: not in enabled drivers build config 00:01:55.587 common/iavf: not in enabled drivers build config 00:01:55.587 common/idpf: not in enabled drivers build config 00:01:55.587 common/ionic: not in enabled drivers build config 00:01:55.587 common/mvep: not in enabled drivers build config 00:01:55.587 common/octeontx: not in enabled drivers build config 00:01:55.587 bus/auxiliary: not in enabled drivers build config 00:01:55.587 bus/cdx: not in enabled drivers build config 00:01:55.587 bus/dpaa: not in enabled drivers build config 00:01:55.587 bus/fslmc: not in enabled drivers build config 00:01:55.587 bus/ifpga: not in enabled drivers build config 00:01:55.587 bus/platform: not in enabled drivers build config 00:01:55.587 bus/uacce: not in enabled drivers build config 00:01:55.587 bus/vmbus: not in enabled drivers build config 00:01:55.587 common/cnxk: not in enabled drivers build config 00:01:55.587 common/mlx5: not in enabled drivers build config 00:01:55.587 common/nfp: not in enabled drivers build config 00:01:55.587 common/nitrox: not in enabled drivers build config 00:01:55.587 common/qat: not in enabled drivers build config 00:01:55.587 common/sfc_efx: not in enabled drivers build config 00:01:55.587 mempool/bucket: not in enabled drivers build config 00:01:55.587 mempool/cnxk: not in enabled drivers build config 00:01:55.587 mempool/dpaa: not in enabled drivers build config 00:01:55.587 mempool/dpaa2: not in enabled drivers build config 00:01:55.587 mempool/octeontx: not in enabled drivers build config 00:01:55.587 mempool/stack: not in enabled drivers build config 00:01:55.587 dma/cnxk: not in enabled drivers build config 00:01:55.587 dma/dpaa: not in enabled drivers build config 00:01:55.587 dma/dpaa2: not in enabled drivers build config 00:01:55.587 dma/hisilicon: not in enabled drivers build config 00:01:55.587 dma/idxd: not in enabled drivers build config 00:01:55.587 dma/ioat: not in enabled drivers build config 00:01:55.587 dma/skeleton: not in enabled drivers build config 00:01:55.587 net/af_packet: not in enabled drivers build config 00:01:55.587 net/af_xdp: not in enabled drivers build config 00:01:55.587 net/ark: not in enabled drivers build config 00:01:55.587 net/atlantic: not in enabled drivers build config 00:01:55.587 net/avp: not in enabled drivers build config 00:01:55.587 net/axgbe: not in enabled drivers build config 00:01:55.587 net/bnx2x: not in enabled drivers build config 00:01:55.587 net/bnxt: not in enabled drivers build config 00:01:55.587 net/bonding: not in enabled drivers build config 00:01:55.587 net/cnxk: not in enabled drivers build config 00:01:55.587 net/cpfl: not in enabled drivers build config 00:01:55.587 net/cxgbe: not in enabled drivers build config 00:01:55.587 net/dpaa: not in enabled drivers build config 00:01:55.587 net/dpaa2: not in enabled drivers build config 00:01:55.587 net/e1000: not in enabled drivers build config 00:01:55.587 net/ena: not in enabled drivers build config 00:01:55.587 net/enetc: not in enabled drivers build config 00:01:55.587 net/enetfec: not in enabled drivers build config 00:01:55.587 net/enic: not in enabled drivers build config 00:01:55.587 net/failsafe: not in enabled drivers build config 00:01:55.587 net/fm10k: not in enabled drivers build config 00:01:55.587 net/gve: not in enabled drivers build config 00:01:55.587 net/hinic: not in enabled drivers build config 00:01:55.587 net/hns3: not in enabled drivers build config 00:01:55.587 net/i40e: not in enabled drivers build 
config 00:01:55.587 net/iavf: not in enabled drivers build config 00:01:55.587 net/ice: not in enabled drivers build config 00:01:55.587 net/idpf: not in enabled drivers build config 00:01:55.587 net/igc: not in enabled drivers build config 00:01:55.587 net/ionic: not in enabled drivers build config 00:01:55.587 net/ipn3ke: not in enabled drivers build config 00:01:55.587 net/ixgbe: not in enabled drivers build config 00:01:55.587 net/mana: not in enabled drivers build config 00:01:55.587 net/memif: not in enabled drivers build config 00:01:55.587 net/mlx4: not in enabled drivers build config 00:01:55.587 net/mlx5: not in enabled drivers build config 00:01:55.587 net/mvneta: not in enabled drivers build config 00:01:55.587 net/mvpp2: not in enabled drivers build config 00:01:55.587 net/netvsc: not in enabled drivers build config 00:01:55.587 net/nfb: not in enabled drivers build config 00:01:55.587 net/nfp: not in enabled drivers build config 00:01:55.587 net/ngbe: not in enabled drivers build config 00:01:55.587 net/null: not in enabled drivers build config 00:01:55.587 net/octeontx: not in enabled drivers build config 00:01:55.587 net/octeon_ep: not in enabled drivers build config 00:01:55.587 net/pcap: not in enabled drivers build config 00:01:55.587 net/pfe: not in enabled drivers build config 00:01:55.587 net/qede: not in enabled drivers build config 00:01:55.587 net/ring: not in enabled drivers build config 00:01:55.587 net/sfc: not in enabled drivers build config 00:01:55.587 net/softnic: not in enabled drivers build config 00:01:55.587 net/tap: not in enabled drivers build config 00:01:55.587 net/thunderx: not in enabled drivers build config 00:01:55.587 net/txgbe: not in enabled drivers build config 00:01:55.587 net/vdev_netvsc: not in enabled drivers build config 00:01:55.587 net/vhost: not in enabled drivers build config 00:01:55.587 net/virtio: not in enabled drivers build config 00:01:55.587 net/vmxnet3: not in enabled drivers build config 00:01:55.587 raw/*: missing internal dependency, "rawdev" 00:01:55.587 crypto/armv8: not in enabled drivers build config 00:01:55.587 crypto/bcmfs: not in enabled drivers build config 00:01:55.587 crypto/caam_jr: not in enabled drivers build config 00:01:55.587 crypto/ccp: not in enabled drivers build config 00:01:55.587 crypto/cnxk: not in enabled drivers build config 00:01:55.587 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.587 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.587 crypto/ipsec_mb: not in enabled drivers build config 00:01:55.587 crypto/mlx5: not in enabled drivers build config 00:01:55.587 crypto/mvsam: not in enabled drivers build config 00:01:55.587 crypto/nitrox: not in enabled drivers build config 00:01:55.587 crypto/null: not in enabled drivers build config 00:01:55.587 crypto/octeontx: not in enabled drivers build config 00:01:55.587 crypto/openssl: not in enabled drivers build config 00:01:55.587 crypto/scheduler: not in enabled drivers build config 00:01:55.588 crypto/uadk: not in enabled drivers build config 00:01:55.588 crypto/virtio: not in enabled drivers build config 00:01:55.588 compress/isal: not in enabled drivers build config 00:01:55.588 compress/mlx5: not in enabled drivers build config 00:01:55.588 compress/nitrox: not in enabled drivers build config 00:01:55.588 compress/octeontx: not in enabled drivers build config 00:01:55.588 compress/zlib: not in enabled drivers build config 00:01:55.588 regex/*: missing internal dependency, "regexdev" 00:01:55.588 ml/*: missing 
internal dependency, "mldev" 00:01:55.588 vdpa/ifc: not in enabled drivers build config 00:01:55.588 vdpa/mlx5: not in enabled drivers build config 00:01:55.588 vdpa/nfp: not in enabled drivers build config 00:01:55.588 vdpa/sfc: not in enabled drivers build config 00:01:55.588 event/*: missing internal dependency, "eventdev" 00:01:55.588 baseband/*: missing internal dependency, "bbdev" 00:01:55.588 gpu/*: missing internal dependency, "gpudev" 00:01:55.588 00:01:55.588 00:01:55.588 Build targets in project: 85 00:01:55.588 00:01:55.588 DPDK 24.03.0 00:01:55.588 00:01:55.588 User defined options 00:01:55.588 buildtype : debug 00:01:55.588 default_library : shared 00:01:55.588 libdir : lib 00:01:55.588 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:55.588 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:55.588 c_link_args : 00:01:55.588 cpu_instruction_set: native 00:01:55.588 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf 00:01:55.588 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro 00:01:55.588 enable_docs : false 00:01:55.588 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:55.588 enable_kmods : false 00:01:55.588 max_lcores : 128 00:01:55.588 tests : false 00:01:55.588 00:01:55.588 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.162 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.162 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.162 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.162 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.162 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.162 [5/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.162 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.162 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.162 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.162 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:56.423 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.423 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.423 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.423 [13/268] Linking static target lib/librte_kvargs.a 00:01:56.423 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.423 [15/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.423 [16/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.423 [17/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.423 [18/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.423 [19/268] 
Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.423 [20/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:56.423 [21/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.423 [22/268] Linking static target lib/librte_pci.a 00:01:56.423 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.423 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.423 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.423 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.423 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.423 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.423 [29/268] Linking static target lib/librte_log.a 00:01:56.423 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.423 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.423 [32/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.423 [33/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:56.423 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.682 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.682 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.682 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:56.682 [38/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:56.682 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:56.682 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.682 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:56.682 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.682 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:56.682 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:56.682 [45/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.682 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:56.682 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.682 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.682 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.682 [50/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.682 [51/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.682 [52/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.682 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:56.682 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:56.682 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:56.682 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:56.682 [57/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.682 [58/268] Linking static target lib/librte_meter.a 00:01:56.682 [59/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:56.682 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.682 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:56.682 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:56.682 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.682 [64/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.682 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:56.682 [66/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.682 [67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.682 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:56.682 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:56.682 [70/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.682 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:56.682 [72/268] Linking static target lib/librte_ring.a 00:01:56.682 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.682 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.682 [75/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.682 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:56.682 [77/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.682 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:56.682 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.682 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:56.682 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:56.682 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.682 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.682 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:56.682 [85/268] Linking static target lib/librte_telemetry.a 00:01:56.682 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:56.682 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:56.682 [88/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.682 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.682 [90/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.682 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.682 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.682 [93/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.682 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.682 [95/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.682 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.682 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.682 [98/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.682 [99/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:56.941 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:56.941 [101/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:56.941 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:56.941 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.941 [104/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.941 [105/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:56.941 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.941 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.941 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:56.941 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.941 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.941 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.941 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.941 [113/268] Linking static target lib/librte_cmdline.a 00:01:56.941 [114/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.941 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.941 [116/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.941 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:56.941 [118/268] Linking static target lib/librte_net.a 00:01:56.941 [119/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.941 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.941 [121/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.941 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.941 [123/268] Linking static target lib/librte_timer.a 00:01:56.941 [124/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:56.941 [125/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.941 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.941 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.941 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.941 [129/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:56.941 [130/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.941 [131/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:56.941 [132/268] Linking static target lib/librte_dmadev.a 00:01:56.941 [133/268] Linking static target lib/librte_mempool.a 00:01:56.941 [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.941 [135/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.941 [136/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.941 [137/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:56.941 [138/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:56.941 [139/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.941 [140/268] Linking 
static target lib/librte_rcu.a 00:01:56.941 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.941 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.941 [143/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.941 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.941 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.941 [146/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.941 [147/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.941 [148/268] Linking static target lib/librte_eal.a 00:01:56.941 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.941 [150/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.941 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.941 [152/268] Linking static target lib/librte_compressdev.a 00:01:56.941 [153/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:56.941 [154/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.941 [155/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.941 [156/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.941 [157/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.200 [158/268] Linking target lib/librte_log.so.24.1 00:01:57.200 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.200 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.200 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.200 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.200 [163/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.200 [164/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.200 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:57.200 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.200 [167/268] Linking static target lib/librte_power.a 00:01:57.200 [168/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.200 [169/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.200 [170/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.200 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.200 [172/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.200 [173/268] Linking static target lib/librte_reorder.a 00:01:57.200 [174/268] Linking static target lib/librte_hash.a 00:01:57.200 [175/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.200 [176/268] Linking static target lib/librte_mbuf.a 00:01:57.200 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.200 [178/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.200 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.200 [180/268] Linking static target lib/librte_security.a 00:01:57.200 [181/268] Compiling C object 
lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.200 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.200 [183/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.200 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.200 [185/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.200 [186/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:57.200 [187/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:57.200 [188/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.200 [189/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.200 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.200 [191/268] Linking target lib/librte_kvargs.so.24.1 00:01:57.200 [192/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.200 [193/268] Linking target lib/librte_telemetry.so.24.1 00:01:57.200 [194/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.200 [195/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.200 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.458 [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:57.458 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.458 [199/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.458 [200/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.458 [201/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.458 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.458 [203/268] Linking static target lib/librte_cryptodev.a 00:01:57.458 [204/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:57.458 [205/268] Linking static target drivers/librte_bus_pci.a 00:01:57.458 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.458 [207/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:57.458 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.458 [209/268] Linking static target drivers/librte_bus_vdev.a 00:01:57.458 [210/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.458 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.458 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.458 [213/268] Linking static target drivers/librte_mempool_ring.a 00:01:57.458 [214/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.458 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.716 [216/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.716 [217/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.716 [218/268] Linking static target lib/librte_ethdev.a 00:01:57.716 [219/268] Generating lib/mempool.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:57.716 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.716 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.974 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.974 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.974 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.974 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.974 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.974 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.910 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.910 [229/268] Linking static target lib/librte_vhost.a 00:01:59.169 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.544 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.808 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.741 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.741 [234/268] Linking target lib/librte_eal.so.24.1 00:02:06.741 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.741 [236/268] Linking target lib/librte_meter.so.24.1 00:02:06.741 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:06.741 [238/268] Linking target lib/librte_timer.so.24.1 00:02:06.741 [239/268] Linking target lib/librte_ring.so.24.1 00:02:06.741 [240/268] Linking target lib/librte_pci.so.24.1 00:02:06.741 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.999 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.999 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.999 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.999 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.999 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.999 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.999 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:06.999 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:06.999 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:06.999 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:07.258 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.258 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:07.258 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.258 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:07.258 [256/268] Linking target lib/librte_net.so.24.1 00:02:07.258 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:07.258 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:07.517 [259/268] Generating symbol file 
lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.517 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.517 [261/268] Linking target lib/librte_hash.so.24.1 00:02:07.517 [262/268] Linking target lib/librte_security.so.24.1 00:02:07.517 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:07.517 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:07.517 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.517 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.776 [267/268] Linking target lib/librte_power.so.24.1 00:02:07.776 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:07.776 INFO: autodetecting backend as ninja 00:02:07.776 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:17.762 CC lib/log/log.o 00:02:17.762 CC lib/log/log_flags.o 00:02:17.762 CC lib/log/log_deprecated.o 00:02:17.762 CC lib/ut/ut.o 00:02:17.762 CC lib/ut_mock/mock.o 00:02:17.762 LIB libspdk_ut.a 00:02:17.762 LIB libspdk_log.a 00:02:17.762 SO libspdk_ut.so.2.0 00:02:17.762 SO libspdk_log.so.7.1 00:02:17.762 LIB libspdk_ut_mock.a 00:02:17.762 SYMLINK libspdk_ut.so 00:02:17.762 SO libspdk_ut_mock.so.6.0 00:02:17.762 SYMLINK libspdk_log.so 00:02:17.762 SYMLINK libspdk_ut_mock.so 00:02:17.762 CXX lib/trace_parser/trace.o 00:02:17.762 CC lib/ioat/ioat.o 00:02:17.762 CC lib/util/base64.o 00:02:17.762 CC lib/dma/dma.o 00:02:17.762 CC lib/util/bit_array.o 00:02:17.762 CC lib/util/cpuset.o 00:02:17.762 CC lib/util/crc16.o 00:02:17.762 CC lib/util/crc32.o 00:02:17.762 CC lib/util/crc32c.o 00:02:17.762 CC lib/util/crc32_ieee.o 00:02:17.762 CC lib/util/crc64.o 00:02:17.762 CC lib/util/dif.o 00:02:17.762 CC lib/util/fd.o 00:02:17.762 CC lib/util/fd_group.o 00:02:17.762 CC lib/util/iov.o 00:02:17.762 CC lib/util/file.o 00:02:17.762 CC lib/util/hexlify.o 00:02:17.762 CC lib/util/math.o 00:02:17.762 CC lib/util/pipe.o 00:02:17.762 CC lib/util/net.o 00:02:17.762 CC lib/util/strerror_tls.o 00:02:17.762 CC lib/util/string.o 00:02:17.762 CC lib/util/uuid.o 00:02:17.762 CC lib/util/xor.o 00:02:17.762 CC lib/util/zipf.o 00:02:17.762 CC lib/util/md5.o 00:02:17.762 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.762 CC lib/vfio_user/host/vfio_user.o 00:02:17.762 LIB libspdk_ioat.a 00:02:17.762 LIB libspdk_dma.a 00:02:17.762 SO libspdk_ioat.so.7.0 00:02:17.762 SO libspdk_dma.so.5.0 00:02:17.762 SYMLINK libspdk_ioat.so 00:02:17.762 SYMLINK libspdk_dma.so 00:02:17.762 LIB libspdk_vfio_user.a 00:02:17.763 SO libspdk_vfio_user.so.5.0 00:02:17.763 LIB libspdk_util.a 00:02:17.763 SYMLINK libspdk_vfio_user.so 00:02:17.763 SO libspdk_util.so.10.1 00:02:17.763 LIB libspdk_trace_parser.a 00:02:17.763 SYMLINK libspdk_util.so 00:02:17.763 SO libspdk_trace_parser.so.6.0 00:02:17.763 SYMLINK libspdk_trace_parser.so 00:02:17.763 CC lib/json/json_parse.o 00:02:17.763 CC lib/json/json_util.o 00:02:17.763 CC lib/json/json_write.o 00:02:17.763 CC lib/idxd/idxd_user.o 00:02:17.763 CC lib/rdma_utils/rdma_utils.o 00:02:17.763 CC lib/idxd/idxd.o 00:02:17.763 CC lib/idxd/idxd_kernel.o 00:02:17.763 CC lib/conf/conf.o 00:02:17.763 CC lib/vmd/vmd.o 00:02:17.763 CC lib/env_dpdk/env.o 00:02:17.763 CC lib/vmd/led.o 00:02:17.763 CC lib/env_dpdk/memory.o 00:02:17.763 CC lib/env_dpdk/pci.o 00:02:17.763 CC lib/env_dpdk/init.o 00:02:17.763 CC lib/env_dpdk/threads.o 00:02:17.763 CC lib/env_dpdk/pci_ioat.o 00:02:17.763 
CC lib/env_dpdk/pci_virtio.o 00:02:17.763 CC lib/env_dpdk/pci_vmd.o 00:02:17.763 CC lib/env_dpdk/pci_idxd.o 00:02:17.763 CC lib/env_dpdk/pci_event.o 00:02:17.763 CC lib/env_dpdk/sigbus_handler.o 00:02:17.763 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:17.763 CC lib/env_dpdk/pci_dpdk.o 00:02:17.763 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:17.763 LIB libspdk_conf.a 00:02:17.763 LIB libspdk_rdma_utils.a 00:02:17.763 LIB libspdk_json.a 00:02:17.763 SO libspdk_conf.so.6.0 00:02:17.763 SO libspdk_rdma_utils.so.1.0 00:02:17.763 SO libspdk_json.so.6.0 00:02:17.763 SYMLINK libspdk_conf.so 00:02:18.021 SYMLINK libspdk_rdma_utils.so 00:02:18.021 SYMLINK libspdk_json.so 00:02:18.021 LIB libspdk_idxd.a 00:02:18.021 SO libspdk_idxd.so.12.1 00:02:18.021 LIB libspdk_vmd.a 00:02:18.021 SYMLINK libspdk_idxd.so 00:02:18.021 SO libspdk_vmd.so.6.0 00:02:18.021 CC lib/rdma_provider/common.o 00:02:18.021 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:18.280 SYMLINK libspdk_vmd.so 00:02:18.280 CC lib/jsonrpc/jsonrpc_server.o 00:02:18.280 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.280 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.280 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:18.280 LIB libspdk_rdma_provider.a 00:02:18.280 SO libspdk_rdma_provider.so.7.0 00:02:18.280 LIB libspdk_jsonrpc.a 00:02:18.280 SYMLINK libspdk_rdma_provider.so 00:02:18.540 SO libspdk_jsonrpc.so.6.0 00:02:18.540 SYMLINK libspdk_jsonrpc.so 00:02:18.540 LIB libspdk_env_dpdk.a 00:02:18.540 SO libspdk_env_dpdk.so.15.1 00:02:18.540 SYMLINK libspdk_env_dpdk.so 00:02:18.800 CC lib/rpc/rpc.o 00:02:19.059 LIB libspdk_rpc.a 00:02:19.059 SO libspdk_rpc.so.6.0 00:02:19.059 SYMLINK libspdk_rpc.so 00:02:19.318 CC lib/keyring/keyring.o 00:02:19.318 CC lib/keyring/keyring_rpc.o 00:02:19.318 CC lib/notify/notify.o 00:02:19.318 CC lib/notify/notify_rpc.o 00:02:19.318 CC lib/trace/trace.o 00:02:19.318 CC lib/trace/trace_flags.o 00:02:19.318 CC lib/trace/trace_rpc.o 00:02:19.577 LIB libspdk_keyring.a 00:02:19.577 LIB libspdk_notify.a 00:02:19.577 SO libspdk_keyring.so.2.0 00:02:19.577 SO libspdk_notify.so.6.0 00:02:19.577 LIB libspdk_trace.a 00:02:19.577 SYMLINK libspdk_keyring.so 00:02:19.577 SO libspdk_trace.so.11.0 00:02:19.577 SYMLINK libspdk_notify.so 00:02:19.577 SYMLINK libspdk_trace.so 00:02:19.836 CC lib/sock/sock.o 00:02:19.836 CC lib/sock/sock_rpc.o 00:02:19.836 CC lib/thread/thread.o 00:02:19.836 CC lib/thread/iobuf.o 00:02:20.095 LIB libspdk_sock.a 00:02:20.355 SO libspdk_sock.so.10.0 00:02:20.355 SYMLINK libspdk_sock.so 00:02:20.614 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:20.614 CC lib/nvme/nvme_ctrlr.o 00:02:20.614 CC lib/nvme/nvme_fabric.o 00:02:20.614 CC lib/nvme/nvme_ns_cmd.o 00:02:20.614 CC lib/nvme/nvme_ns.o 00:02:20.614 CC lib/nvme/nvme_pcie_common.o 00:02:20.614 CC lib/nvme/nvme_pcie.o 00:02:20.614 CC lib/nvme/nvme_qpair.o 00:02:20.614 CC lib/nvme/nvme.o 00:02:20.614 CC lib/nvme/nvme_quirks.o 00:02:20.614 CC lib/nvme/nvme_transport.o 00:02:20.614 CC lib/nvme/nvme_discovery.o 00:02:20.614 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:20.614 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:20.614 CC lib/nvme/nvme_tcp.o 00:02:20.614 CC lib/nvme/nvme_opal.o 00:02:20.614 CC lib/nvme/nvme_zns.o 00:02:20.614 CC lib/nvme/nvme_io_msg.o 00:02:20.614 CC lib/nvme/nvme_poll_group.o 00:02:20.614 CC lib/nvme/nvme_stubs.o 00:02:20.614 CC lib/nvme/nvme_auth.o 00:02:20.614 CC lib/nvme/nvme_cuse.o 00:02:20.614 CC lib/nvme/nvme_rdma.o 00:02:20.873 LIB libspdk_thread.a 00:02:20.873 SO libspdk_thread.so.11.0 00:02:21.132 SYMLINK libspdk_thread.so 00:02:21.390 CC lib/init/json_config.o 
00:02:21.390 CC lib/virtio/virtio.o 00:02:21.390 CC lib/init/subsystem.o 00:02:21.390 CC lib/virtio/virtio_vhost_user.o 00:02:21.390 CC lib/init/subsystem_rpc.o 00:02:21.390 CC lib/init/rpc.o 00:02:21.390 CC lib/virtio/virtio_pci.o 00:02:21.390 CC lib/virtio/virtio_vfio_user.o 00:02:21.390 CC lib/blob/blobstore.o 00:02:21.390 CC lib/fsdev/fsdev.o 00:02:21.390 CC lib/blob/request.o 00:02:21.390 CC lib/fsdev/fsdev_io.o 00:02:21.390 CC lib/accel/accel.o 00:02:21.390 CC lib/blob/zeroes.o 00:02:21.390 CC lib/fsdev/fsdev_rpc.o 00:02:21.390 CC lib/blob/blob_bs_dev.o 00:02:21.390 CC lib/accel/accel_rpc.o 00:02:21.390 CC lib/accel/accel_sw.o 00:02:21.390 LIB libspdk_init.a 00:02:21.390 SO libspdk_init.so.6.0 00:02:21.647 LIB libspdk_virtio.a 00:02:21.647 SYMLINK libspdk_init.so 00:02:21.647 SO libspdk_virtio.so.7.0 00:02:21.647 SYMLINK libspdk_virtio.so 00:02:21.647 LIB libspdk_fsdev.a 00:02:21.905 SO libspdk_fsdev.so.2.0 00:02:21.905 CC lib/event/app.o 00:02:21.905 CC lib/event/reactor.o 00:02:21.905 CC lib/event/log_rpc.o 00:02:21.905 CC lib/event/app_rpc.o 00:02:21.905 CC lib/event/scheduler_static.o 00:02:21.905 SYMLINK libspdk_fsdev.so 00:02:21.905 LIB libspdk_accel.a 00:02:22.164 SO libspdk_accel.so.16.0 00:02:22.164 LIB libspdk_nvme.a 00:02:22.164 LIB libspdk_event.a 00:02:22.164 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:22.164 SYMLINK libspdk_accel.so 00:02:22.164 SO libspdk_event.so.14.0 00:02:22.164 SO libspdk_nvme.so.15.0 00:02:22.164 SYMLINK libspdk_event.so 00:02:22.423 SYMLINK libspdk_nvme.so 00:02:22.423 CC lib/bdev/bdev_zone.o 00:02:22.423 CC lib/bdev/bdev.o 00:02:22.423 CC lib/bdev/bdev_rpc.o 00:02:22.423 CC lib/bdev/scsi_nvme.o 00:02:22.423 CC lib/bdev/part.o 00:02:22.423 LIB libspdk_fuse_dispatcher.a 00:02:22.681 SO libspdk_fuse_dispatcher.so.1.0 00:02:22.682 SYMLINK libspdk_fuse_dispatcher.so 00:02:23.253 LIB libspdk_blob.a 00:02:23.254 SO libspdk_blob.so.12.0 00:02:23.254 SYMLINK libspdk_blob.so 00:02:23.512 CC lib/blobfs/tree.o 00:02:23.512 CC lib/blobfs/blobfs.o 00:02:23.512 CC lib/lvol/lvol.o 00:02:24.077 LIB libspdk_bdev.a 00:02:24.077 LIB libspdk_blobfs.a 00:02:24.077 SO libspdk_bdev.so.17.0 00:02:24.077 SO libspdk_blobfs.so.11.0 00:02:24.077 LIB libspdk_lvol.a 00:02:24.335 SYMLINK libspdk_bdev.so 00:02:24.335 SYMLINK libspdk_blobfs.so 00:02:24.335 SO libspdk_lvol.so.11.0 00:02:24.335 SYMLINK libspdk_lvol.so 00:02:24.594 CC lib/scsi/dev.o 00:02:24.594 CC lib/scsi/lun.o 00:02:24.594 CC lib/scsi/port.o 00:02:24.594 CC lib/scsi/scsi_bdev.o 00:02:24.594 CC lib/scsi/scsi.o 00:02:24.594 CC lib/scsi/scsi_pr.o 00:02:24.594 CC lib/scsi/scsi_rpc.o 00:02:24.594 CC lib/scsi/task.o 00:02:24.594 CC lib/nbd/nbd.o 00:02:24.594 CC lib/nbd/nbd_rpc.o 00:02:24.594 CC lib/nvmf/ctrlr.o 00:02:24.594 CC lib/nvmf/ctrlr_discovery.o 00:02:24.594 CC lib/ftl/ftl_core.o 00:02:24.594 CC lib/nvmf/ctrlr_bdev.o 00:02:24.594 CC lib/nvmf/subsystem.o 00:02:24.594 CC lib/nvmf/nvmf.o 00:02:24.594 CC lib/ftl/ftl_debug.o 00:02:24.594 CC lib/ftl/ftl_init.o 00:02:24.594 CC lib/nvmf/nvmf_rpc.o 00:02:24.594 CC lib/ftl/ftl_layout.o 00:02:24.594 CC lib/nvmf/transport.o 00:02:24.594 CC lib/nvmf/tcp.o 00:02:24.594 CC lib/ftl/ftl_io.o 00:02:24.594 CC lib/nvmf/stubs.o 00:02:24.594 CC lib/ftl/ftl_sb.o 00:02:24.594 CC lib/nvmf/mdns_server.o 00:02:24.594 CC lib/ublk/ublk.o 00:02:24.594 CC lib/ftl/ftl_l2p.o 00:02:24.594 CC lib/nvmf/rdma.o 00:02:24.594 CC lib/ftl/ftl_l2p_flat.o 00:02:24.594 CC lib/ublk/ublk_rpc.o 00:02:24.594 CC lib/nvmf/auth.o 00:02:24.594 CC lib/ftl/ftl_nv_cache.o 00:02:24.594 CC lib/ftl/ftl_band.o 
00:02:24.594 CC lib/ftl/ftl_band_ops.o 00:02:24.594 CC lib/ftl/ftl_writer.o 00:02:24.594 CC lib/ftl/ftl_rq.o 00:02:24.594 CC lib/ftl/ftl_reloc.o 00:02:24.594 CC lib/ftl/ftl_l2p_cache.o 00:02:24.594 CC lib/ftl/ftl_p2l.o 00:02:24.594 CC lib/ftl/ftl_p2l_log.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:24.594 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:24.594 CC lib/ftl/utils/ftl_conf.o 00:02:24.594 CC lib/ftl/utils/ftl_md.o 00:02:24.594 CC lib/ftl/utils/ftl_mempool.o 00:02:24.594 CC lib/ftl/utils/ftl_bitmap.o 00:02:24.594 CC lib/ftl/utils/ftl_property.o 00:02:24.594 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:24.594 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:24.594 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:24.594 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:24.594 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:24.594 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:24.594 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:24.594 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:24.594 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:24.594 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:24.594 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:24.594 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:24.594 CC lib/ftl/base/ftl_base_dev.o 00:02:24.594 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:24.594 CC lib/ftl/base/ftl_base_bdev.o 00:02:24.594 CC lib/ftl/ftl_trace.o 00:02:24.853 LIB libspdk_nbd.a 00:02:24.853 SO libspdk_nbd.so.7.0 00:02:25.111 SYMLINK libspdk_nbd.so 00:02:25.111 LIB libspdk_scsi.a 00:02:25.111 SO libspdk_scsi.so.9.0 00:02:25.111 LIB libspdk_ublk.a 00:02:25.111 SO libspdk_ublk.so.3.0 00:02:25.111 SYMLINK libspdk_scsi.so 00:02:25.111 SYMLINK libspdk_ublk.so 00:02:25.370 LIB libspdk_ftl.a 00:02:25.370 SO libspdk_ftl.so.9.0 00:02:25.370 CC lib/iscsi/conn.o 00:02:25.370 CC lib/iscsi/init_grp.o 00:02:25.370 CC lib/iscsi/iscsi.o 00:02:25.370 CC lib/iscsi/portal_grp.o 00:02:25.370 CC lib/iscsi/param.o 00:02:25.370 CC lib/iscsi/iscsi_subsystem.o 00:02:25.370 CC lib/iscsi/tgt_node.o 00:02:25.370 CC lib/iscsi/task.o 00:02:25.370 CC lib/iscsi/iscsi_rpc.o 00:02:25.370 CC lib/vhost/vhost.o 00:02:25.370 CC lib/vhost/vhost_rpc.o 00:02:25.370 CC lib/vhost/vhost_scsi.o 00:02:25.370 CC lib/vhost/vhost_blk.o 00:02:25.370 CC lib/vhost/rte_vhost_user.o 00:02:25.628 SYMLINK libspdk_ftl.so 00:02:26.192 LIB libspdk_nvmf.a 00:02:26.193 SO libspdk_nvmf.so.20.0 00:02:26.193 LIB libspdk_vhost.a 00:02:26.193 SO libspdk_vhost.so.8.0 00:02:26.193 SYMLINK libspdk_nvmf.so 00:02:26.193 SYMLINK libspdk_vhost.so 00:02:26.451 LIB libspdk_iscsi.a 00:02:26.451 SO libspdk_iscsi.so.8.0 00:02:26.451 SYMLINK libspdk_iscsi.so 00:02:27.018 CC module/env_dpdk/env_dpdk_rpc.o 00:02:27.018 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:27.018 CC module/scheduler/gscheduler/gscheduler.o 00:02:27.018 CC module/blob/bdev/blob_bdev.o 00:02:27.018 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:27.018 CC module/accel/dsa/accel_dsa.o 00:02:27.018 CC module/accel/dsa/accel_dsa_rpc.o 00:02:27.018 LIB libspdk_env_dpdk_rpc.a 00:02:27.018 CC module/accel/error/accel_error.o 00:02:27.018 
CC module/accel/error/accel_error_rpc.o 00:02:27.018 CC module/accel/ioat/accel_ioat.o 00:02:27.018 CC module/accel/ioat/accel_ioat_rpc.o 00:02:27.018 CC module/sock/posix/posix.o 00:02:27.018 CC module/accel/iaa/accel_iaa_rpc.o 00:02:27.018 CC module/accel/iaa/accel_iaa.o 00:02:27.018 CC module/keyring/linux/keyring.o 00:02:27.018 CC module/fsdev/aio/fsdev_aio.o 00:02:27.018 SO libspdk_env_dpdk_rpc.so.6.0 00:02:27.018 CC module/keyring/linux/keyring_rpc.o 00:02:27.018 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:27.018 CC module/fsdev/aio/linux_aio_mgr.o 00:02:27.018 CC module/keyring/file/keyring.o 00:02:27.018 CC module/keyring/file/keyring_rpc.o 00:02:27.277 SYMLINK libspdk_env_dpdk_rpc.so 00:02:27.277 LIB libspdk_scheduler_dpdk_governor.a 00:02:27.277 LIB libspdk_scheduler_gscheduler.a 00:02:27.277 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:27.277 SO libspdk_scheduler_gscheduler.so.4.0 00:02:27.277 LIB libspdk_keyring_linux.a 00:02:27.277 LIB libspdk_accel_ioat.a 00:02:27.277 LIB libspdk_scheduler_dynamic.a 00:02:27.277 LIB libspdk_accel_error.a 00:02:27.277 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:27.277 LIB libspdk_keyring_file.a 00:02:27.277 SO libspdk_keyring_linux.so.1.0 00:02:27.277 SYMLINK libspdk_scheduler_gscheduler.so 00:02:27.277 SO libspdk_accel_error.so.2.0 00:02:27.277 SO libspdk_scheduler_dynamic.so.4.0 00:02:27.277 SO libspdk_accel_ioat.so.6.0 00:02:27.277 LIB libspdk_accel_iaa.a 00:02:27.277 SO libspdk_keyring_file.so.2.0 00:02:27.277 LIB libspdk_blob_bdev.a 00:02:27.277 LIB libspdk_accel_dsa.a 00:02:27.277 SYMLINK libspdk_keyring_linux.so 00:02:27.277 SO libspdk_accel_iaa.so.3.0 00:02:27.277 SYMLINK libspdk_accel_error.so 00:02:27.277 SYMLINK libspdk_scheduler_dynamic.so 00:02:27.277 SO libspdk_blob_bdev.so.12.0 00:02:27.277 SYMLINK libspdk_keyring_file.so 00:02:27.277 SYMLINK libspdk_accel_ioat.so 00:02:27.277 SO libspdk_accel_dsa.so.5.0 00:02:27.277 SYMLINK libspdk_accel_iaa.so 00:02:27.536 SYMLINK libspdk_blob_bdev.so 00:02:27.536 SYMLINK libspdk_accel_dsa.so 00:02:27.536 LIB libspdk_fsdev_aio.a 00:02:27.536 SO libspdk_fsdev_aio.so.1.0 00:02:27.794 LIB libspdk_sock_posix.a 00:02:27.794 SYMLINK libspdk_fsdev_aio.so 00:02:27.794 SO libspdk_sock_posix.so.6.0 00:02:27.794 SYMLINK libspdk_sock_posix.so 00:02:27.794 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:27.794 CC module/blobfs/bdev/blobfs_bdev.o 00:02:27.794 CC module/bdev/aio/bdev_aio.o 00:02:27.794 CC module/bdev/raid/bdev_raid.o 00:02:27.794 CC module/bdev/aio/bdev_aio_rpc.o 00:02:27.794 CC module/bdev/error/vbdev_error.o 00:02:27.794 CC module/bdev/raid/bdev_raid_sb.o 00:02:27.794 CC module/bdev/raid/bdev_raid_rpc.o 00:02:27.794 CC module/bdev/error/vbdev_error_rpc.o 00:02:27.794 CC module/bdev/raid/raid0.o 00:02:27.794 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:27.794 CC module/bdev/gpt/gpt.o 00:02:27.794 CC module/bdev/lvol/vbdev_lvol.o 00:02:27.794 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:27.794 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:27.794 CC module/bdev/raid/raid1.o 00:02:27.794 CC module/bdev/passthru/vbdev_passthru.o 00:02:27.794 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:27.794 CC module/bdev/raid/concat.o 00:02:27.794 CC module/bdev/gpt/vbdev_gpt.o 00:02:27.794 CC module/bdev/ftl/bdev_ftl.o 00:02:27.794 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:27.794 CC module/bdev/delay/vbdev_delay.o 00:02:27.794 CC module/bdev/malloc/bdev_malloc.o 00:02:27.794 CC module/bdev/iscsi/bdev_iscsi.o 00:02:27.794 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:27.794 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:27.794 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:27.794 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:27.794 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:27.794 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:27.794 CC module/bdev/nvme/bdev_nvme.o 00:02:27.794 CC module/bdev/null/bdev_null.o 00:02:27.794 CC module/bdev/split/vbdev_split.o 00:02:27.794 CC module/bdev/null/bdev_null_rpc.o 00:02:27.794 CC module/bdev/nvme/nvme_rpc.o 00:02:27.794 CC module/bdev/split/vbdev_split_rpc.o 00:02:27.794 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:27.794 CC module/bdev/nvme/vbdev_opal.o 00:02:27.794 CC module/bdev/nvme/bdev_mdns_client.o 00:02:27.794 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:27.794 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:28.053 LIB libspdk_blobfs_bdev.a 00:02:28.053 LIB libspdk_bdev_error.a 00:02:28.053 SO libspdk_blobfs_bdev.so.6.0 00:02:28.053 LIB libspdk_bdev_split.a 00:02:28.053 LIB libspdk_bdev_ftl.a 00:02:28.053 SO libspdk_bdev_error.so.6.0 00:02:28.053 LIB libspdk_bdev_passthru.a 00:02:28.053 LIB libspdk_bdev_null.a 00:02:28.053 LIB libspdk_bdev_gpt.a 00:02:28.053 SO libspdk_bdev_ftl.so.6.0 00:02:28.053 LIB libspdk_bdev_aio.a 00:02:28.053 SO libspdk_bdev_split.so.6.0 00:02:28.053 LIB libspdk_bdev_zone_block.a 00:02:28.312 SO libspdk_bdev_passthru.so.6.0 00:02:28.312 SO libspdk_bdev_null.so.6.0 00:02:28.312 SYMLINK libspdk_blobfs_bdev.so 00:02:28.312 SO libspdk_bdev_aio.so.6.0 00:02:28.312 SO libspdk_bdev_gpt.so.6.0 00:02:28.312 SYMLINK libspdk_bdev_error.so 00:02:28.312 LIB libspdk_bdev_delay.a 00:02:28.312 SO libspdk_bdev_zone_block.so.6.0 00:02:28.312 LIB libspdk_bdev_malloc.a 00:02:28.312 LIB libspdk_bdev_iscsi.a 00:02:28.312 SYMLINK libspdk_bdev_ftl.so 00:02:28.312 SO libspdk_bdev_delay.so.6.0 00:02:28.312 SYMLINK libspdk_bdev_passthru.so 00:02:28.312 SYMLINK libspdk_bdev_split.so 00:02:28.312 SO libspdk_bdev_malloc.so.6.0 00:02:28.312 SYMLINK libspdk_bdev_aio.so 00:02:28.312 SYMLINK libspdk_bdev_null.so 00:02:28.312 SO libspdk_bdev_iscsi.so.6.0 00:02:28.312 SYMLINK libspdk_bdev_gpt.so 00:02:28.312 SYMLINK libspdk_bdev_zone_block.so 00:02:28.312 SYMLINK libspdk_bdev_delay.so 00:02:28.312 SYMLINK libspdk_bdev_malloc.so 00:02:28.312 LIB libspdk_bdev_lvol.a 00:02:28.312 SYMLINK libspdk_bdev_iscsi.so 00:02:28.312 LIB libspdk_bdev_virtio.a 00:02:28.312 SO libspdk_bdev_lvol.so.6.0 00:02:28.312 SO libspdk_bdev_virtio.so.6.0 00:02:28.312 SYMLINK libspdk_bdev_lvol.so 00:02:28.312 SYMLINK libspdk_bdev_virtio.so 00:02:28.570 LIB libspdk_bdev_raid.a 00:02:28.570 SO libspdk_bdev_raid.so.6.0 00:02:28.829 SYMLINK libspdk_bdev_raid.so 00:02:29.765 LIB libspdk_bdev_nvme.a 00:02:29.765 SO libspdk_bdev_nvme.so.7.1 00:02:29.765 SYMLINK libspdk_bdev_nvme.so 00:02:30.334 CC module/event/subsystems/scheduler/scheduler.o 00:02:30.334 CC module/event/subsystems/sock/sock.o 00:02:30.334 CC module/event/subsystems/iobuf/iobuf.o 00:02:30.334 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:30.334 CC module/event/subsystems/keyring/keyring.o 00:02:30.334 CC module/event/subsystems/vmd/vmd.o 00:02:30.334 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:30.334 CC module/event/subsystems/fsdev/fsdev.o 00:02:30.334 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:30.334 LIB libspdk_event_sock.a 00:02:30.334 LIB libspdk_event_scheduler.a 00:02:30.334 LIB libspdk_event_iobuf.a 00:02:30.334 LIB libspdk_event_vhost_blk.a 00:02:30.334 LIB libspdk_event_keyring.a 00:02:30.334 SO libspdk_event_scheduler.so.4.0 00:02:30.334 SO libspdk_event_iobuf.so.3.0 
00:02:30.334 SO libspdk_event_sock.so.5.0 00:02:30.334 LIB libspdk_event_fsdev.a 00:02:30.334 LIB libspdk_event_vmd.a 00:02:30.593 SO libspdk_event_vhost_blk.so.3.0 00:02:30.593 SO libspdk_event_keyring.so.1.0 00:02:30.593 SO libspdk_event_fsdev.so.1.0 00:02:30.593 SO libspdk_event_vmd.so.6.0 00:02:30.593 SYMLINK libspdk_event_sock.so 00:02:30.593 SYMLINK libspdk_event_iobuf.so 00:02:30.593 SYMLINK libspdk_event_scheduler.so 00:02:30.593 SYMLINK libspdk_event_keyring.so 00:02:30.593 SYMLINK libspdk_event_vhost_blk.so 00:02:30.593 SYMLINK libspdk_event_fsdev.so 00:02:30.593 SYMLINK libspdk_event_vmd.so 00:02:30.852 CC module/event/subsystems/accel/accel.o 00:02:30.852 LIB libspdk_event_accel.a 00:02:30.852 SO libspdk_event_accel.so.6.0 00:02:31.111 SYMLINK libspdk_event_accel.so 00:02:31.370 CC module/event/subsystems/bdev/bdev.o 00:02:31.370 LIB libspdk_event_bdev.a 00:02:31.629 SO libspdk_event_bdev.so.6.0 00:02:31.629 SYMLINK libspdk_event_bdev.so 00:02:31.887 CC module/event/subsystems/ublk/ublk.o 00:02:31.887 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:31.887 CC module/event/subsystems/scsi/scsi.o 00:02:31.887 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:31.887 CC module/event/subsystems/nbd/nbd.o 00:02:31.887 LIB libspdk_event_ublk.a 00:02:31.887 LIB libspdk_event_nbd.a 00:02:31.887 LIB libspdk_event_scsi.a 00:02:31.887 SO libspdk_event_ublk.so.3.0 00:02:32.146 LIB libspdk_event_nvmf.a 00:02:32.146 SO libspdk_event_nbd.so.6.0 00:02:32.146 SO libspdk_event_scsi.so.6.0 00:02:32.146 SO libspdk_event_nvmf.so.6.0 00:02:32.146 SYMLINK libspdk_event_ublk.so 00:02:32.146 SYMLINK libspdk_event_scsi.so 00:02:32.146 SYMLINK libspdk_event_nbd.so 00:02:32.146 SYMLINK libspdk_event_nvmf.so 00:02:32.405 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:32.405 CC module/event/subsystems/iscsi/iscsi.o 00:02:32.405 LIB libspdk_event_vhost_scsi.a 00:02:32.663 LIB libspdk_event_iscsi.a 00:02:32.663 SO libspdk_event_vhost_scsi.so.3.0 00:02:32.663 SO libspdk_event_iscsi.so.6.0 00:02:32.663 SYMLINK libspdk_event_vhost_scsi.so 00:02:32.663 SYMLINK libspdk_event_iscsi.so 00:02:32.921 SO libspdk.so.6.0 00:02:32.921 SYMLINK libspdk.so 00:02:33.192 CXX app/trace/trace.o 00:02:33.192 TEST_HEADER include/spdk/accel.h 00:02:33.192 TEST_HEADER include/spdk/accel_module.h 00:02:33.192 TEST_HEADER include/spdk/base64.h 00:02:33.192 TEST_HEADER include/spdk/assert.h 00:02:33.192 TEST_HEADER include/spdk/barrier.h 00:02:33.192 TEST_HEADER include/spdk/bdev.h 00:02:33.192 TEST_HEADER include/spdk/bdev_module.h 00:02:33.192 TEST_HEADER include/spdk/bdev_zone.h 00:02:33.192 TEST_HEADER include/spdk/bit_array.h 00:02:33.192 TEST_HEADER include/spdk/bit_pool.h 00:02:33.192 TEST_HEADER include/spdk/blob_bdev.h 00:02:33.192 CC test/rpc_client/rpc_client_test.o 00:02:33.192 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:33.192 TEST_HEADER include/spdk/blobfs.h 00:02:33.192 CC app/trace_record/trace_record.o 00:02:33.192 TEST_HEADER include/spdk/blob.h 00:02:33.192 TEST_HEADER include/spdk/config.h 00:02:33.192 TEST_HEADER include/spdk/conf.h 00:02:33.192 TEST_HEADER include/spdk/cpuset.h 00:02:33.192 TEST_HEADER include/spdk/crc16.h 00:02:33.192 TEST_HEADER include/spdk/crc32.h 00:02:33.192 TEST_HEADER include/spdk/dif.h 00:02:33.192 TEST_HEADER include/spdk/crc64.h 00:02:33.192 CC app/spdk_lspci/spdk_lspci.o 00:02:33.192 TEST_HEADER include/spdk/endian.h 00:02:33.192 TEST_HEADER include/spdk/dma.h 00:02:33.192 CC app/spdk_nvme_discover/discovery_aer.o 00:02:33.192 TEST_HEADER include/spdk/env_dpdk.h 00:02:33.192 
TEST_HEADER include/spdk/event.h 00:02:33.192 TEST_HEADER include/spdk/env.h 00:02:33.192 TEST_HEADER include/spdk/fd_group.h 00:02:33.192 TEST_HEADER include/spdk/fd.h 00:02:33.192 TEST_HEADER include/spdk/fsdev.h 00:02:33.192 TEST_HEADER include/spdk/file.h 00:02:33.192 CC app/spdk_top/spdk_top.o 00:02:33.192 TEST_HEADER include/spdk/fsdev_module.h 00:02:33.192 TEST_HEADER include/spdk/ftl.h 00:02:33.192 TEST_HEADER include/spdk/gpt_spec.h 00:02:33.192 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:33.192 CC app/spdk_nvme_perf/perf.o 00:02:33.192 TEST_HEADER include/spdk/hexlify.h 00:02:33.192 CC app/spdk_nvme_identify/identify.o 00:02:33.192 TEST_HEADER include/spdk/histogram_data.h 00:02:33.192 TEST_HEADER include/spdk/idxd_spec.h 00:02:33.192 TEST_HEADER include/spdk/ioat.h 00:02:33.192 TEST_HEADER include/spdk/idxd.h 00:02:33.192 TEST_HEADER include/spdk/init.h 00:02:33.192 TEST_HEADER include/spdk/ioat_spec.h 00:02:33.192 TEST_HEADER include/spdk/json.h 00:02:33.192 TEST_HEADER include/spdk/iscsi_spec.h 00:02:33.192 TEST_HEADER include/spdk/jsonrpc.h 00:02:33.192 TEST_HEADER include/spdk/keyring.h 00:02:33.192 TEST_HEADER include/spdk/keyring_module.h 00:02:33.192 TEST_HEADER include/spdk/likely.h 00:02:33.192 TEST_HEADER include/spdk/lvol.h 00:02:33.192 TEST_HEADER include/spdk/log.h 00:02:33.192 TEST_HEADER include/spdk/md5.h 00:02:33.192 TEST_HEADER include/spdk/mmio.h 00:02:33.192 TEST_HEADER include/spdk/memory.h 00:02:33.192 TEST_HEADER include/spdk/nbd.h 00:02:33.192 TEST_HEADER include/spdk/net.h 00:02:33.192 TEST_HEADER include/spdk/nvme_intel.h 00:02:33.192 TEST_HEADER include/spdk/nvme.h 00:02:33.192 TEST_HEADER include/spdk/notify.h 00:02:33.192 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:33.192 TEST_HEADER include/spdk/nvme_spec.h 00:02:33.192 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:33.192 CC app/nvmf_tgt/nvmf_main.o 00:02:33.192 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:33.193 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:33.193 TEST_HEADER include/spdk/nvme_zns.h 00:02:33.193 TEST_HEADER include/spdk/nvmf_spec.h 00:02:33.193 TEST_HEADER include/spdk/nvmf_transport.h 00:02:33.193 TEST_HEADER include/spdk/nvmf.h 00:02:33.193 TEST_HEADER include/spdk/opal_spec.h 00:02:33.193 CC app/iscsi_tgt/iscsi_tgt.o 00:02:33.193 TEST_HEADER include/spdk/pci_ids.h 00:02:33.193 TEST_HEADER include/spdk/opal.h 00:02:33.193 TEST_HEADER include/spdk/pipe.h 00:02:33.193 TEST_HEADER include/spdk/queue.h 00:02:33.193 TEST_HEADER include/spdk/reduce.h 00:02:33.193 TEST_HEADER include/spdk/rpc.h 00:02:33.193 TEST_HEADER include/spdk/scheduler.h 00:02:33.193 TEST_HEADER include/spdk/scsi_spec.h 00:02:33.193 TEST_HEADER include/spdk/scsi.h 00:02:33.193 CC app/spdk_tgt/spdk_tgt.o 00:02:33.193 TEST_HEADER include/spdk/stdinc.h 00:02:33.193 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:33.193 TEST_HEADER include/spdk/sock.h 00:02:33.193 TEST_HEADER include/spdk/string.h 00:02:33.193 TEST_HEADER include/spdk/thread.h 00:02:33.193 TEST_HEADER include/spdk/trace.h 00:02:33.193 TEST_HEADER include/spdk/trace_parser.h 00:02:33.193 TEST_HEADER include/spdk/ublk.h 00:02:33.193 TEST_HEADER include/spdk/util.h 00:02:33.193 TEST_HEADER include/spdk/tree.h 00:02:33.193 TEST_HEADER include/spdk/version.h 00:02:33.193 TEST_HEADER include/spdk/uuid.h 00:02:33.193 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:33.193 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:33.193 TEST_HEADER include/spdk/vhost.h 00:02:33.193 TEST_HEADER include/spdk/vmd.h 00:02:33.193 CC app/spdk_dd/spdk_dd.o 00:02:33.193 
TEST_HEADER include/spdk/xor.h 00:02:33.193 TEST_HEADER include/spdk/zipf.h 00:02:33.193 CXX test/cpp_headers/accel_module.o 00:02:33.193 CXX test/cpp_headers/accel.o 00:02:33.193 CXX test/cpp_headers/assert.o 00:02:33.193 CXX test/cpp_headers/base64.o 00:02:33.193 CXX test/cpp_headers/barrier.o 00:02:33.193 CXX test/cpp_headers/bdev_module.o 00:02:33.193 CXX test/cpp_headers/bdev.o 00:02:33.193 CXX test/cpp_headers/bdev_zone.o 00:02:33.193 CXX test/cpp_headers/blob_bdev.o 00:02:33.193 CXX test/cpp_headers/blobfs.o 00:02:33.193 CXX test/cpp_headers/bit_array.o 00:02:33.193 CXX test/cpp_headers/blob.o 00:02:33.193 CXX test/cpp_headers/bit_pool.o 00:02:33.193 CXX test/cpp_headers/blobfs_bdev.o 00:02:33.193 CXX test/cpp_headers/config.o 00:02:33.193 CXX test/cpp_headers/crc16.o 00:02:33.193 CXX test/cpp_headers/conf.o 00:02:33.193 CXX test/cpp_headers/crc32.o 00:02:33.193 CXX test/cpp_headers/cpuset.o 00:02:33.193 CXX test/cpp_headers/dif.o 00:02:33.193 CXX test/cpp_headers/dma.o 00:02:33.193 CXX test/cpp_headers/crc64.o 00:02:33.193 CXX test/cpp_headers/env_dpdk.o 00:02:33.193 CXX test/cpp_headers/env.o 00:02:33.193 CXX test/cpp_headers/fd_group.o 00:02:33.193 CXX test/cpp_headers/event.o 00:02:33.193 CXX test/cpp_headers/fd.o 00:02:33.193 CXX test/cpp_headers/file.o 00:02:33.193 CXX test/cpp_headers/fsdev_module.o 00:02:33.193 CXX test/cpp_headers/endian.o 00:02:33.193 CXX test/cpp_headers/fsdev.o 00:02:33.193 CXX test/cpp_headers/fuse_dispatcher.o 00:02:33.193 CXX test/cpp_headers/ftl.o 00:02:33.193 CXX test/cpp_headers/hexlify.o 00:02:33.193 CXX test/cpp_headers/gpt_spec.o 00:02:33.193 CXX test/cpp_headers/histogram_data.o 00:02:33.193 CXX test/cpp_headers/idxd_spec.o 00:02:33.193 CXX test/cpp_headers/idxd.o 00:02:33.193 CXX test/cpp_headers/ioat_spec.o 00:02:33.193 CXX test/cpp_headers/json.o 00:02:33.193 CXX test/cpp_headers/ioat.o 00:02:33.193 CXX test/cpp_headers/init.o 00:02:33.193 CXX test/cpp_headers/keyring.o 00:02:33.193 CXX test/cpp_headers/jsonrpc.o 00:02:33.193 CXX test/cpp_headers/iscsi_spec.o 00:02:33.193 CXX test/cpp_headers/keyring_module.o 00:02:33.193 CXX test/cpp_headers/likely.o 00:02:33.193 CXX test/cpp_headers/lvol.o 00:02:33.193 CXX test/cpp_headers/mmio.o 00:02:33.193 CXX test/cpp_headers/memory.o 00:02:33.193 CXX test/cpp_headers/log.o 00:02:33.193 CXX test/cpp_headers/net.o 00:02:33.193 CXX test/cpp_headers/md5.o 00:02:33.193 CXX test/cpp_headers/nbd.o 00:02:33.193 CXX test/cpp_headers/notify.o 00:02:33.193 CXX test/cpp_headers/nvme.o 00:02:33.193 CXX test/cpp_headers/nvme_intel.o 00:02:33.193 CXX test/cpp_headers/nvme_ocssd.o 00:02:33.193 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:33.193 CXX test/cpp_headers/nvme_zns.o 00:02:33.193 CXX test/cpp_headers/nvme_spec.o 00:02:33.193 CXX test/cpp_headers/nvmf_cmd.o 00:02:33.193 CXX test/cpp_headers/nvmf.o 00:02:33.193 CXX test/cpp_headers/nvmf_transport.o 00:02:33.193 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:33.193 CXX test/cpp_headers/nvmf_spec.o 00:02:33.193 CXX test/cpp_headers/opal.o 00:02:33.193 CXX test/cpp_headers/opal_spec.o 00:02:33.193 CXX test/cpp_headers/pci_ids.o 00:02:33.193 CXX test/cpp_headers/pipe.o 00:02:33.193 CXX test/cpp_headers/queue.o 00:02:33.193 CXX test/cpp_headers/rpc.o 00:02:33.193 CC examples/ioat/verify/verify.o 00:02:33.193 CXX test/cpp_headers/reduce.o 00:02:33.193 CXX test/cpp_headers/scheduler.o 00:02:33.193 CXX test/cpp_headers/scsi.o 00:02:33.193 CXX test/cpp_headers/scsi_spec.o 00:02:33.193 CC examples/ioat/perf/perf.o 00:02:33.193 CXX test/cpp_headers/sock.o 00:02:33.193 CXX 
test/cpp_headers/string.o 00:02:33.193 CC examples/util/zipf/zipf.o 00:02:33.193 CXX test/cpp_headers/stdinc.o 00:02:33.193 CXX test/cpp_headers/thread.o 00:02:33.193 CXX test/cpp_headers/trace.o 00:02:33.193 CC test/env/vtophys/vtophys.o 00:02:33.193 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:33.193 CC test/thread/poller_perf/poller_perf.o 00:02:33.193 CXX test/cpp_headers/trace_parser.o 00:02:33.193 CXX test/cpp_headers/tree.o 00:02:33.468 CC test/env/memory/memory_ut.o 00:02:33.468 CC test/env/pci/pci_ut.o 00:02:33.468 CXX test/cpp_headers/ublk.o 00:02:33.468 CC test/app/jsoncat/jsoncat.o 00:02:33.468 CC test/app/histogram_perf/histogram_perf.o 00:02:33.468 CXX test/cpp_headers/util.o 00:02:33.468 CC test/app/stub/stub.o 00:02:33.468 CC app/fio/nvme/fio_plugin.o 00:02:33.468 CC test/app/bdev_svc/bdev_svc.o 00:02:33.468 CC test/dma/test_dma/test_dma.o 00:02:33.468 CC app/fio/bdev/fio_plugin.o 00:02:33.468 CXX test/cpp_headers/uuid.o 00:02:33.733 LINK spdk_lspci 00:02:33.733 LINK iscsi_tgt 00:02:33.733 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:33.733 LINK interrupt_tgt 00:02:33.733 CXX test/cpp_headers/version.o 00:02:33.733 LINK rpc_client_test 00:02:33.733 CXX test/cpp_headers/vfio_user_pci.o 00:02:33.733 LINK poller_perf 00:02:33.733 CXX test/cpp_headers/vfio_user_spec.o 00:02:33.733 CXX test/cpp_headers/vhost.o 00:02:33.733 CXX test/cpp_headers/vmd.o 00:02:33.733 CXX test/cpp_headers/xor.o 00:02:33.733 CXX test/cpp_headers/zipf.o 00:02:33.996 LINK jsoncat 00:02:33.996 CC test/env/mem_callbacks/mem_callbacks.o 00:02:33.996 LINK vtophys 00:02:33.996 LINK nvmf_tgt 00:02:33.996 LINK ioat_perf 00:02:33.996 LINK spdk_nvme_discover 00:02:33.996 LINK bdev_svc 00:02:33.996 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:33.996 LINK spdk_trace 00:02:33.996 LINK spdk_tgt 00:02:33.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:33.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:33.996 LINK spdk_dd 00:02:33.996 LINK spdk_trace_record 00:02:33.996 LINK zipf 00:02:33.996 LINK histogram_perf 00:02:33.996 LINK env_dpdk_post_init 00:02:33.996 LINK stub 00:02:33.996 LINK verify 00:02:33.996 LINK pci_ut 00:02:34.254 LINK nvme_fuzz 00:02:34.254 LINK spdk_nvme 00:02:34.254 LINK spdk_bdev 00:02:34.254 LINK spdk_nvme_perf 00:02:34.254 CC app/vhost/vhost.o 00:02:34.254 LINK test_dma 00:02:34.254 LINK spdk_nvme_identify 00:02:34.254 CC test/event/reactor_perf/reactor_perf.o 00:02:34.254 CC test/event/event_perf/event_perf.o 00:02:34.254 LINK vhost_fuzz 00:02:34.514 CC test/event/reactor/reactor.o 00:02:34.514 CC test/event/scheduler/scheduler.o 00:02:34.514 CC test/event/app_repeat/app_repeat.o 00:02:34.514 LINK mem_callbacks 00:02:34.514 CC examples/idxd/perf/perf.o 00:02:34.514 CC examples/sock/hello_world/hello_sock.o 00:02:34.514 CC examples/vmd/lsvmd/lsvmd.o 00:02:34.514 CC examples/vmd/led/led.o 00:02:34.514 CC examples/thread/thread/thread_ex.o 00:02:34.514 LINK vhost 00:02:34.514 LINK spdk_top 00:02:34.514 LINK reactor_perf 00:02:34.514 LINK reactor 00:02:34.514 LINK event_perf 00:02:34.514 LINK lsvmd 00:02:34.514 LINK app_repeat 00:02:34.514 LINK led 00:02:34.514 LINK memory_ut 00:02:34.514 LINK scheduler 00:02:34.773 LINK hello_sock 00:02:34.773 LINK idxd_perf 00:02:34.773 LINK thread 00:02:34.773 CC test/nvme/e2edp/nvme_dp.o 00:02:34.773 CC test/nvme/startup/startup.o 00:02:34.773 CC test/nvme/aer/aer.o 00:02:34.773 CC test/nvme/fdp/fdp.o 00:02:34.773 CC test/nvme/overhead/overhead.o 00:02:34.773 CC test/nvme/simple_copy/simple_copy.o 00:02:34.773 CC 
test/nvme/reserve/reserve.o 00:02:34.773 CC test/nvme/connect_stress/connect_stress.o 00:02:34.773 CC test/nvme/fused_ordering/fused_ordering.o 00:02:34.773 CC test/nvme/err_injection/err_injection.o 00:02:34.773 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:34.773 CC test/nvme/compliance/nvme_compliance.o 00:02:34.773 CC test/nvme/cuse/cuse.o 00:02:34.773 CC test/nvme/sgl/sgl.o 00:02:34.773 CC test/nvme/reset/reset.o 00:02:34.773 CC test/nvme/boot_partition/boot_partition.o 00:02:34.773 CC test/blobfs/mkfs/mkfs.o 00:02:34.773 CC test/accel/dif/dif.o 00:02:34.773 CC test/lvol/esnap/esnap.o 00:02:35.031 LINK startup 00:02:35.031 LINK boot_partition 00:02:35.031 LINK err_injection 00:02:35.031 LINK fused_ordering 00:02:35.031 LINK connect_stress 00:02:35.031 LINK reserve 00:02:35.031 LINK doorbell_aers 00:02:35.031 LINK simple_copy 00:02:35.031 LINK nvme_dp 00:02:35.031 LINK aer 00:02:35.031 LINK sgl 00:02:35.031 LINK reset 00:02:35.031 LINK mkfs 00:02:35.031 LINK overhead 00:02:35.031 LINK fdp 00:02:35.031 CC examples/nvme/abort/abort.o 00:02:35.031 CC examples/nvme/hello_world/hello_world.o 00:02:35.031 CC examples/nvme/arbitration/arbitration.o 00:02:35.031 LINK nvme_compliance 00:02:35.031 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:35.031 CC examples/nvme/hotplug/hotplug.o 00:02:35.031 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:35.031 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:35.031 CC examples/nvme/reconnect/reconnect.o 00:02:35.032 CC examples/accel/perf/accel_perf.o 00:02:35.290 LINK iscsi_fuzz 00:02:35.290 CC examples/blob/cli/blobcli.o 00:02:35.290 CC examples/blob/hello_world/hello_blob.o 00:02:35.290 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:35.290 LINK cmb_copy 00:02:35.290 LINK pmr_persistence 00:02:35.290 LINK hello_world 00:02:35.290 LINK hotplug 00:02:35.290 LINK arbitration 00:02:35.290 LINK abort 00:02:35.290 LINK hello_blob 00:02:35.290 LINK dif 00:02:35.290 LINK reconnect 00:02:35.550 LINK nvme_manage 00:02:35.550 LINK hello_fsdev 00:02:35.550 LINK accel_perf 00:02:35.550 LINK blobcli 00:02:35.809 LINK cuse 00:02:35.809 CC test/bdev/bdevio/bdevio.o 00:02:36.068 CC examples/bdev/hello_world/hello_bdev.o 00:02:36.068 CC examples/bdev/bdevperf/bdevperf.o 00:02:36.068 LINK bdevio 00:02:36.068 LINK hello_bdev 00:02:36.636 LINK bdevperf 00:02:36.896 CC examples/nvmf/nvmf/nvmf.o 00:02:37.155 LINK nvmf 00:02:38.094 LINK esnap 00:02:38.353 00:02:38.354 real 0m50.432s 00:02:38.354 user 7m2.384s 00:02:38.354 sys 3m31.482s 00:02:38.354 20:59:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:38.354 20:59:25 make -- common/autotest_common.sh@10 -- $ set +x 00:02:38.354 ************************************ 00:02:38.354 END TEST make 00:02:38.354 ************************************ 00:02:38.354 20:59:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:38.354 20:59:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:38.354 20:59:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:38.354 20:59:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.354 20:59:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:38.354 20:59:25 -- pm/common@44 -- $ pid=3617695 00:02:38.354 20:59:25 -- pm/common@50 -- $ kill -TERM 3617695 00:02:38.354 20:59:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.354 20:59:25 -- pm/common@43 -- $ [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:38.354 20:59:25 -- pm/common@44 -- $ pid=3617696 00:02:38.354 20:59:25 -- pm/common@50 -- $ kill -TERM 3617696 00:02:38.354 20:59:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.354 20:59:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:38.354 20:59:25 -- pm/common@44 -- $ pid=3617697 00:02:38.354 20:59:25 -- pm/common@50 -- $ kill -TERM 3617697 00:02:38.354 20:59:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:38.354 20:59:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:38.354 20:59:25 -- pm/common@44 -- $ pid=3617722 00:02:38.354 20:59:25 -- pm/common@50 -- $ sudo -E kill -TERM 3617722 00:02:38.354 20:59:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:38.354 20:59:25 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:38.354 20:59:25 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:38.354 20:59:25 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:38.354 20:59:25 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:38.613 20:59:25 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:38.614 20:59:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:38.614 20:59:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:38.614 20:59:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:38.614 20:59:25 -- scripts/common.sh@336 -- # IFS=.-: 00:02:38.614 20:59:25 -- scripts/common.sh@336 -- # read -ra ver1 00:02:38.614 20:59:25 -- scripts/common.sh@337 -- # IFS=.-: 00:02:38.614 20:59:25 -- scripts/common.sh@337 -- # read -ra ver2 00:02:38.614 20:59:25 -- scripts/common.sh@338 -- # local 'op=<' 00:02:38.614 20:59:25 -- scripts/common.sh@340 -- # ver1_l=2 00:02:38.614 20:59:25 -- scripts/common.sh@341 -- # ver2_l=1 00:02:38.614 20:59:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:38.614 20:59:25 -- scripts/common.sh@344 -- # case "$op" in 00:02:38.614 20:59:25 -- scripts/common.sh@345 -- # : 1 00:02:38.614 20:59:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:38.614 20:59:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:38.614 20:59:25 -- scripts/common.sh@365 -- # decimal 1 00:02:38.614 20:59:25 -- scripts/common.sh@353 -- # local d=1 00:02:38.614 20:59:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:38.614 20:59:25 -- scripts/common.sh@355 -- # echo 1 00:02:38.614 20:59:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:38.614 20:59:25 -- scripts/common.sh@366 -- # decimal 2 00:02:38.614 20:59:25 -- scripts/common.sh@353 -- # local d=2 00:02:38.614 20:59:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:38.614 20:59:25 -- scripts/common.sh@355 -- # echo 2 00:02:38.614 20:59:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:38.614 20:59:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:38.614 20:59:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:38.614 20:59:25 -- scripts/common.sh@368 -- # return 0 00:02:38.614 20:59:25 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:38.614 20:59:25 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.614 --rc genhtml_branch_coverage=1 00:02:38.614 --rc genhtml_function_coverage=1 00:02:38.614 --rc genhtml_legend=1 00:02:38.614 --rc geninfo_all_blocks=1 00:02:38.614 --rc geninfo_unexecuted_blocks=1 00:02:38.614 00:02:38.614 ' 00:02:38.614 20:59:25 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.614 --rc genhtml_branch_coverage=1 00:02:38.614 --rc genhtml_function_coverage=1 00:02:38.614 --rc genhtml_legend=1 00:02:38.614 --rc geninfo_all_blocks=1 00:02:38.614 --rc geninfo_unexecuted_blocks=1 00:02:38.614 00:02:38.614 ' 00:02:38.614 20:59:25 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.614 --rc genhtml_branch_coverage=1 00:02:38.614 --rc genhtml_function_coverage=1 00:02:38.614 --rc genhtml_legend=1 00:02:38.614 --rc geninfo_all_blocks=1 00:02:38.614 --rc geninfo_unexecuted_blocks=1 00:02:38.614 00:02:38.614 ' 00:02:38.614 20:59:25 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:38.614 --rc genhtml_branch_coverage=1 00:02:38.614 --rc genhtml_function_coverage=1 00:02:38.614 --rc genhtml_legend=1 00:02:38.614 --rc geninfo_all_blocks=1 00:02:38.614 --rc geninfo_unexecuted_blocks=1 00:02:38.614 00:02:38.614 ' 00:02:38.614 20:59:25 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:38.614 20:59:25 -- nvmf/common.sh@7 -- # uname -s 00:02:38.614 20:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:38.614 20:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:38.614 20:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:38.614 20:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:38.614 20:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:38.614 20:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:38.614 20:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:38.614 20:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:38.614 20:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:38.614 20:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:38.614 20:59:25 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:02:38.614 20:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:02:38.614 20:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:38.614 20:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:38.614 20:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:38.614 20:59:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:38.614 20:59:25 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:38.614 20:59:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:38.614 20:59:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:38.614 20:59:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.614 20:59:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.614 20:59:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.614 20:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.614 20:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.614 20:59:25 -- paths/export.sh@5 -- # export PATH 00:02:38.614 20:59:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.614 20:59:25 -- nvmf/common.sh@51 -- # : 0 00:02:38.614 20:59:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:38.614 20:59:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:38.614 20:59:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:38.614 20:59:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:38.614 20:59:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:38.614 20:59:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:38.614 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:38.614 20:59:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:38.614 20:59:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:38.614 20:59:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:38.614 20:59:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:38.614 20:59:25 -- spdk/autotest.sh@32 -- # uname -s 00:02:38.614 20:59:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:38.614 20:59:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:38.614 20:59:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:38.614 
00:02:38.614 20:59:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:38.614 20:59:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps
00:02:38.614 20:59:25 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:38.614 20:59:25 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:38.614 20:59:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:38.614 20:59:25 -- spdk/autotest.sh@48 -- # udevadm_pid=3679819
00:02:38.614 20:59:25 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:38.614 20:59:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:38.614 20:59:25 -- pm/common@17 -- # local monitor
00:02:38.614 20:59:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.614 20:59:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.614 20:59:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.614 20:59:25 -- pm/common@21 -- # date +%s
00:02:38.614 20:59:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:38.614 20:59:25 -- pm/common@21 -- # date +%s
00:02:38.614 20:59:25 -- pm/common@25 -- # sleep 1
00:02:38.614 20:59:25 -- pm/common@21 -- # date +%s
00:02:38.614 20:59:25 -- pm/common@21 -- # date +%s
00:02:38.614 20:59:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732651165
00:02:38.614 20:59:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732651165
00:02:38.614 20:59:25 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732651165
00:02:38.614 20:59:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732651165
00:02:38.614 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732651165_collect-cpu-load.pm.log
00:02:38.614 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732651165_collect-vmstat.pm.log
00:02:38.614 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732651165_collect-cpu-temp.pm.log
00:02:38.614 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732651165_collect-bmc-pm.bmc.pm.log
00:02:39.552 20:59:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:39.552 20:59:26 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:39.552 20:59:26 -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:39.552 20:59:26 -- common/autotest_common.sh@10 -- # set +x
00:02:39.552 20:59:26 -- spdk/autotest.sh@59 -- # create_test_list
00:02:39.552 20:59:26 -- common/autotest_common.sh@752 -- # xtrace_disable
00:02:39.552 20:59:26 -- common/autotest_common.sh@10 -- # set +x
00:02:39.552 20:59:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh
00:02:39.552 20:59:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk
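autotest.sh@33-40 above saves the stock systemd-coredump handler and then points kernel.core_pattern at SPDK's core-collector.sh, so any crash during the run is piped straight to the collector with the PID (%P), signal (%s) and timestamp (%t) expanded by the kernel. The shape of that setup as a standalone sketch, assuming root and a collector path of our own (the restore trap is illustrative, not SPDK's exact cleanup path):

  # a core_pattern starting with '|' pipes core dumps to a handler program
  old_core_pattern=$(</proc/sys/kernel/core_pattern)   # e.g. |/usr/lib/systemd/systemd-coredump ...
  echo '|/tmp/core-collector.sh %P %s %t' > /proc/sys/kernel/core_pattern
  trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT   # put the old handler back on exit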
00:02:39.552 20:59:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:39.552 20:59:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:02:39.552 20:59:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:02:39.552 20:59:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:39.552 20:59:26 -- common/autotest_common.sh@1457 -- # uname
00:02:39.552 20:59:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:02:39.552 20:59:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:39.552 20:59:26 -- common/autotest_common.sh@1477 -- # uname
00:02:39.552 20:59:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:02:39.552 20:59:26 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:02:39.552 20:59:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:02:39.812 lcov: LCOV version 1.15
00:02:39.812 20:59:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info
00:02:58.030 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:58.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:02.224 20:59:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:02.224 20:59:49 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:02.224 20:59:49 -- common/autotest_common.sh@10 -- # set +x
00:03:02.224 20:59:49 -- spdk/autotest.sh@78 -- # rm -f
00:03:02.224 20:59:49 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.764 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:04.764 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:05.024 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:05.024 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:05.024 0000:d8:00.0 (8086 0a54): Already using the nvme driver
00:03:06.409 20:59:53 -- spdk/autotest.sh@83 -- # get_zoned_devs
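The lcov run at autotest.sh@72 above uses -i to capture an initial, all-zero baseline before any test executes; merging that baseline with the post-run capture later keeps files that no test ever touched visible in the report instead of silently disappearing. The flow in miniature, with tracefile names of our own choosing:

  lcov -q -c --no-external -i -t Baseline -d "$src" -o cov_base.info   # zero-count baseline
  # ... run the test suites ...
  lcov -q -c --no-external -t Tests -d "$src" -o cov_test.info         # counters after the run
  lcov -a cov_base.info -a cov_test.info -o cov_total.info             # union: untested files stay at 0%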
00:03:06.409 20:59:53 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:06.409 20:59:53 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:06.409 20:59:53 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:06.409 20:59:53 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:06.409 20:59:53 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:06.409 20:59:53 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:06.409 20:59:53 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:06.409 20:59:53 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:06.409 20:59:53 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:06.409 20:59:53 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:06.409 20:59:53 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:06.409 20:59:53 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:06.409 20:59:53 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:06.409 20:59:53 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:06.409 No valid GPT data, bailing
00:03:06.409 20:59:53 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:06.409 20:59:53 -- scripts/common.sh@394 -- # pt=
00:03:06.409 20:59:53 -- scripts/common.sh@395 -- # return 1
00:03:06.409 20:59:53 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:06.409 1+0 records in
00:03:06.409 1+0 records out
00:03:06.409 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00605737 s, 173 MB/s
00:03:06.409 20:59:53 -- spdk/autotest.sh@105 -- # sync
00:03:06.409 20:59:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:06.409 20:59:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:06.409 20:59:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:12.983 20:59:59 -- spdk/autotest.sh@111 -- # uname -s
00:03:12.983 20:59:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:12.983 20:59:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:12.983 20:59:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:03:15.518 Hugepages
00:03:15.518 node     hugesize     free /  total
00:03:15.518 node0   1048576kB        0 /      0
00:03:15.518 node0      2048kB        0 /      0
00:03:15.518 node1   1048576kB        0 /      0
00:03:15.518 node1      2048kB        0 /      0
00:03:15.518
00:03:15.518 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:15.518 I/OAT    0000:00:04.0    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.1    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.2    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.3    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.4    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.5    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.6    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:00:04.7    8086   2021   0       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.0    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.1    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.2    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.3    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.4    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.5    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.6    8086   2021   1       ioatdma          -          -
00:03:15.518 I/OAT    0000:80:04.7    8086   2021   1       ioatdma          -          -
00:03:15.518 NVMe     0000:d8:00.0    8086   0a54   1       nvme             nvme0      nvme0n1
00:03:15.518 21:00:02 -- spdk/autotest.sh@117 -- # uname -s
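get_zoned_devs above walks /sys/block/nvme* and records any namespace whose queue/zoned attribute is not none, because a zoned (ZNS) device cannot simply be wiped by the dd that follows; here nvme0n1 reports none and is cleared. The core check as a standalone sketch:

  # list NVMe block devices that are zoned and must be skipped by the wipe
  for dev in /sys/block/nvme*; do
      [[ -e $dev/queue/zoned ]] || continue
      [[ $(<"$dev/queue/zoned") != none ]] && echo "${dev##*/} is zoned"
  done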
00:03:15.518 21:00:02 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:15.518 21:00:02 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:15.518 21:00:02 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:03:17.428 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:17.428 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:17.686 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:20.977 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:22.355 21:00:09 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:23.291 21:00:10 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:23.291 21:00:10 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:23.291 21:00:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:23.291 21:00:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:23.291 21:00:10 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:23.291 21:00:10 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:23.291 21:00:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:23.291 21:00:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:23.291 21:00:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:23.550 21:00:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:23.550 21:00:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:03:23.550 21:00:10 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:03:26.086 Waiting for block devices as requested
00:03:26.086 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:26.086 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:26.086 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:26.345 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:26.345 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:26.345 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:26.345 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:26.605 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:26.605 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:03:26.605 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:03:26.605 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:03:26.864 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:03:26.864 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:03:26.864 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:03:27.123 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:03:27.123 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:03:27.123 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme
00:03:29.044 21:00:15 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
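get_nvme_bdfs above never parses lspci: gen_nvme.sh emits SPDK's attach configuration as JSON and jq extracts each controller's PCI address from .config[].params.traddr, which is how 0000:d8:00.0 is discovered. Reassembled as a runnable fragment:

  # collect NVMe PCI addresses (BDFs) from SPDK's generated config
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # prints 0000:d8:00.0 on this node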
"${bdfs[@]}" 00:03:29.044 21:00:15 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:03:29.044 21:00:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:03:29.044 21:00:15 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:29.044 21:00:15 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:29.044 21:00:15 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:29.044 21:00:15 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:29.044 21:00:15 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:29.044 21:00:15 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:29.044 21:00:15 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:29.044 21:00:16 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:29.044 21:00:16 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:29.044 21:00:16 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:29.044 21:00:16 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:29.044 21:00:16 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:29.044 21:00:16 -- common/autotest_common.sh@1543 -- # continue 00:03:29.044 21:00:16 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:29.044 21:00:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:29.044 21:00:16 -- common/autotest_common.sh@10 -- # set +x 00:03:29.044 21:00:16 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:29.044 21:00:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:29.044 21:00:16 -- common/autotest_common.sh@10 -- # set +x 00:03:29.044 21:00:16 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:31.580 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:31.580 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.882 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.259 21:00:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:36.259 21:00:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:36.259 
00:03:36.259 21:00:23 -- common/autotest_common.sh@10 -- # set +x
00:03:36.259 21:00:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:36.259 21:00:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:36.259 21:00:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:36.259 21:00:23 -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:36.259 21:00:23 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:36.259 21:00:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:36.259 21:00:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:36.259 21:00:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:36.259 21:00:23 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:36.259 21:00:23 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:36.259 21:00:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:36.259 21:00:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:36.259 21:00:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:36.259 21:00:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:36.259 21:00:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0
00:03:36.259 21:00:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:36.259 21:00:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device
00:03:36.259 21:00:23 -- common/autotest_common.sh@1566 -- # device=0x0a54
00:03:36.259 21:00:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:03:36.259 21:00:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:03:36.259 21:00:23 -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:03:36.259 21:00:23 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0
00:03:36.259 21:00:23 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]]
00:03:36.259 21:00:23 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3696547
00:03:36.259 21:00:23 -- common/autotest_common.sh@1585 -- # waitforlisten 3696547
00:03:36.259 21:00:23 -- common/autotest_common.sh@835 -- # '[' -z 3696547 ']'
00:03:36.259 21:00:23 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:36.259 21:00:23 -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:36.259 21:00:23 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:36.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:36.259 21:00:23 -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:36.259 21:00:23 -- common/autotest_common.sh@10 -- # set +x
00:03:36.259 21:00:23 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
00:03:36.259 [2024-11-26 21:00:23.495128] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
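get_nvme_bdfs_by_id above narrows the BDF list to one part number: for each candidate it reads the PCI device ID out of sysfs and keeps the address only on an exact match against 0x0a54, the Intel datacenter NVMe device present on this node. As a sketch:

  # keep only controllers whose PCI device ID matches the target part
  target=0x0a54
  matched=()
  for bdf in "${_bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target" ]] && matched+=("$bdf")
  done
  printf '%s\n' "${matched[@]}"   # 0000:d8:00.0 here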
00:03:36.259 [2024-11-26 21:00:23.495170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696547 ]
00:03:36.259 [2024-11-26 21:00:23.552153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:36.259 [2024-11-26 21:00:23.591568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:36.518 21:00:23 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:36.518 21:00:23 -- common/autotest_common.sh@868 -- # return 0
00:03:36.518 21:00:23 -- common/autotest_common.sh@1587 -- # bdf_id=0
00:03:36.518 21:00:23 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:03:36.518 21:00:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0
00:03:39.825 nvme0n1
00:03:39.825 21:00:26 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:03:39.825 [2024-11-26 21:00:26.954191] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:03:39.825 request:
00:03:39.825 {
00:03:39.825 "nvme_ctrlr_name": "nvme0",
00:03:39.825 "password": "test",
00:03:39.825 "method": "bdev_nvme_opal_revert",
00:03:39.825 "req_id": 1
00:03:39.825 }
00:03:39.825 Got JSON-RPC error response
00:03:39.825 response:
00:03:39.825 {
00:03:39.825 "code": -32602,
00:03:39.825 "message": "Invalid parameters"
00:03:39.825 }
00:03:39.825 21:00:26 -- common/autotest_common.sh@1591 -- # true
00:03:39.825 21:00:26 -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:03:39.825 21:00:26 -- common/autotest_common.sh@1595 -- # killprocess 3696547
00:03:39.825 21:00:26 -- common/autotest_common.sh@954 -- # '[' -z 3696547 ']'
00:03:39.825 21:00:26 -- common/autotest_common.sh@958 -- # kill -0 3696547
00:03:39.825 21:00:26 -- common/autotest_common.sh@959 -- # uname
00:03:39.825 21:00:26 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:39.825 21:00:26 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3696547
00:03:39.825 21:00:27 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:39.825 21:00:27 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:39.825 21:00:27 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3696547'
00:03:39.825 killing process with pid 3696547
00:03:39.825 21:00:27 -- common/autotest_common.sh@973 -- # kill 3696547
00:03:44.014 21:00:30 -- common/autotest_common.sh@978 -- # wait 3696547
00:03:44.014 21:00:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:03:44.014 21:00:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:03:44.014 21:00:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:44.014 21:00:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:03:44.014 21:00:30 -- spdk/autotest.sh@149 -- # timing_enter lib
00:03:44.014 21:00:30 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:44.014 21:00:30 -- common/autotest_common.sh@10 -- # set +x
00:03:44.014 21:00:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:03:44.014 21:00:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:44.014 21:00:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:44.014 21:00:30 -- common/autotest_common.sh@1111 -- # xtrace_disable
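killprocess above layers three guards around the kill: kill -0 proves the PID is still alive and signalable, ps -o comm= protects against PID reuse by an unrelated binary (and refuses to touch sudo), and wait reaps the child so its exit status is collected before the next stage. The skeleton of that pattern, condensed from the trace:

  stop_pid() {                                      # sketch of the guard sequence
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone
      [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1   # never kill the sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                    # wait works because $pid is our own child
  }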
00:03:44.014 21:00:30 -- common/autotest_common.sh@10 -- # set +x
00:03:44.014 ************************************
00:03:44.014 START TEST env
00:03:44.014 ************************************
00:03:44.014 21:00:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh
00:03:44.014 * Looking for test storage...
00:03:44.014 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1693 -- # lcov --version
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:44.014 21:00:31 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:44.014 21:00:31 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:44.014 21:00:31 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:44.014 21:00:31 env -- scripts/common.sh@336 -- # IFS=.-:
00:03:44.014 21:00:31 env -- scripts/common.sh@336 -- # read -ra ver1
00:03:44.014 21:00:31 env -- scripts/common.sh@337 -- # IFS=.-:
00:03:44.014 21:00:31 env -- scripts/common.sh@337 -- # read -ra ver2
00:03:44.014 21:00:31 env -- scripts/common.sh@338 -- # local 'op=<'
00:03:44.014 21:00:31 env -- scripts/common.sh@340 -- # ver1_l=2
00:03:44.014 21:00:31 env -- scripts/common.sh@341 -- # ver2_l=1
00:03:44.014 21:00:31 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:44.014 21:00:31 env -- scripts/common.sh@344 -- # case "$op" in
00:03:44.014 21:00:31 env -- scripts/common.sh@345 -- # : 1
00:03:44.014 21:00:31 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:44.014 21:00:31 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
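cmp_versions above splits both version strings by pointing IFS at the characters .-: before read -ra, then walks the numeric components; lt 1.15 2 therefore compares 1 against 2 as integers rather than '1.15' against '2' as strings. The splitting step on its own:

  ver=1.15
  IFS='.-:' read -ra parts <<< "$ver"   # -> parts=(1 15)
  echo "major=${parts[0]} minor=${parts[1]}"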
00:03:44.014 21:00:31 env -- scripts/common.sh@365 -- # decimal 1
00:03:44.014 21:00:31 env -- scripts/common.sh@353 -- # local d=1
00:03:44.014 21:00:31 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:44.014 21:00:31 env -- scripts/common.sh@355 -- # echo 1
00:03:44.014 21:00:31 env -- scripts/common.sh@365 -- # ver1[v]=1
00:03:44.014 21:00:31 env -- scripts/common.sh@366 -- # decimal 2
00:03:44.014 21:00:31 env -- scripts/common.sh@353 -- # local d=2
00:03:44.014 21:00:31 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:44.014 21:00:31 env -- scripts/common.sh@355 -- # echo 2
00:03:44.014 21:00:31 env -- scripts/common.sh@366 -- # ver2[v]=2
00:03:44.014 21:00:31 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:44.014 21:00:31 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:44.014 21:00:31 env -- scripts/common.sh@368 -- # return 0
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:44.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:44.014 --rc genhtml_branch_coverage=1
00:03:44.014 --rc genhtml_function_coverage=1
00:03:44.014 --rc genhtml_legend=1
00:03:44.014 --rc geninfo_all_blocks=1
00:03:44.014 --rc geninfo_unexecuted_blocks=1
00:03:44.014
00:03:44.014 '
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:44.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:44.014 --rc genhtml_branch_coverage=1
00:03:44.014 --rc genhtml_function_coverage=1
00:03:44.014 --rc genhtml_legend=1
00:03:44.014 --rc geninfo_all_blocks=1
00:03:44.014 --rc geninfo_unexecuted_blocks=1
00:03:44.014
00:03:44.014 '
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:44.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:44.014 --rc genhtml_branch_coverage=1
00:03:44.014 --rc genhtml_function_coverage=1
00:03:44.014 --rc genhtml_legend=1
00:03:44.014 --rc geninfo_all_blocks=1
00:03:44.014 --rc geninfo_unexecuted_blocks=1
00:03:44.014
00:03:44.014 '
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:44.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:44.014 --rc genhtml_branch_coverage=1
00:03:44.014 --rc genhtml_function_coverage=1
00:03:44.014 --rc genhtml_legend=1
00:03:44.014 --rc geninfo_all_blocks=1
00:03:44.014 --rc geninfo_unexecuted_blocks=1
00:03:44.014
00:03:44.014 '
00:03:44.014 21:00:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:44.014 21:00:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:44.014 21:00:31 env -- common/autotest_common.sh@10 -- # set +x
00:03:44.014 ************************************
00:03:44.014 START TEST env_memory
00:03:44.014 ************************************
00:03:44.014 21:00:31 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut
00:03:44.014
00:03:44.014
00:03:44.014 CUnit - A unit testing framework for C - Version 2.1-3
00:03:44.014 http://cunit.sourceforge.net/
00:03:44.014
00:03:44.014
00:03:44.014 Suite: memory
00:03:44.014 Test: alloc and free memory map ...[2024-11-26 21:00:31.214266] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:03:44.014 passed
00:03:44.014 Test: mem map translation ...[2024-11-26 21:00:31.231157] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:03:44.014 [2024-11-26 21:00:31.231178] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:03:44.014 [2024-11-26 21:00:31.231210] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:03:44.014 [2024-11-26 21:00:31.231216] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:03:44.014 passed
00:03:44.014 Test: mem map registration ...[2024-11-26 21:00:31.264363] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:03:44.014 [2024-11-26 21:00:31.264376] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:03:44.014 passed
00:03:44.014 Test: mem map adjacent registrations ...passed
00:03:44.014
00:03:44.015 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:44.015               suites      1      1    n/a      0        0
00:03:44.015                tests      4      4      4      0        0
00:03:44.015              asserts    152    152    152      0      n/a
00:03:44.015
00:03:44.015 Elapsed time =    0.124 seconds
00:03:44.015
00:03:44.015 real    0m0.136s
00:03:44.015 user    0m0.128s
00:03:44.015 sys     0m0.008s
00:03:44.015 21:00:31 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:44.015 21:00:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:03:44.015 ************************************
00:03:44.015 END TEST env_memory
00:03:44.015 ************************************
00:03:44.015 21:00:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:44.015 21:00:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:44.015 21:00:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:44.015 21:00:31 env -- common/autotest_common.sh@10 -- # set +x
00:03:44.015 ************************************
00:03:44.015 START TEST env_vtophys
00:03:44.015 ************************************
00:03:44.015 21:00:31 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys
00:03:44.015 EAL: lib.eal log level changed from notice to debug
00:03:44.015 EAL: Detected lcore 0 as core 0 on socket 0
00:03:44.015 EAL: Detected lcore 1 as core 1 on socket 0
00:03:44.015 EAL: Detected lcore 2 as core 2 on socket 0
00:03:44.015 EAL: Detected lcore 3 as core 3 on socket 0
00:03:44.015 EAL: Detected lcore 4 as core 4 on socket 0
00:03:44.015 EAL: Detected lcore 5 as core 5 on socket 0
00:03:44.015 EAL: Detected lcore 6 as core 6 on socket 0
00:03:44.015 EAL: Detected lcore 7 as core 8 on socket 0
00:03:44.015 EAL: Detected lcore 8 as core 9 on socket 0
00:03:44.015 EAL: Detected lcore 9 as core 10 on socket 0
00:03:44.015 EAL: Detected lcore 10 as core 11 on socket 0
00:03:44.015 EAL: Detected lcore 11 as core 12 on socket 0
00:03:44.015 EAL: Detected lcore 12 as core 13 on socket 0
00:03:44.015 EAL: Detected lcore 13 as core 14 on socket 0
00:03:44.015 EAL: Detected lcore 14 as core 16 on socket 0
00:03:44.015 EAL: Detected lcore 15 as core 17 on socket 0
00:03:44.015 EAL: Detected lcore 16 as core 18 on socket 0
00:03:44.015 EAL: Detected lcore 17 as core 19 on socket 0
00:03:44.015 EAL: Detected lcore 18 as core 20 on socket 0
00:03:44.015 EAL: Detected lcore 19 as core 21 on socket 0
00:03:44.015 EAL: Detected lcore 20 as core 22 on socket 0
00:03:44.015 EAL: Detected lcore 21 as core 24 on socket 0
00:03:44.015 EAL: Detected lcore 22 as core 25 on socket 0
00:03:44.015 EAL: Detected lcore 23 as core 26 on socket 0
00:03:44.015 EAL: Detected lcore 24 as core 27 on socket 0
00:03:44.015 EAL: Detected lcore 25 as core 28 on socket 0
00:03:44.015 EAL: Detected lcore 26 as core 29 on socket 0
00:03:44.015 EAL: Detected lcore 27 as core 30 on socket 0
00:03:44.015 EAL: Detected lcore 28 as core 0 on socket 1
00:03:44.015 EAL: Detected lcore 29 as core 1 on socket 1
00:03:44.015 EAL: Detected lcore 30 as core 2 on socket 1
00:03:44.015 EAL: Detected lcore 31 as core 3 on socket 1
00:03:44.015 EAL: Detected lcore 32 as core 4 on socket 1
00:03:44.015 EAL: Detected lcore 33 as core 5 on socket 1
00:03:44.015 EAL: Detected lcore 34 as core 6 on socket 1
00:03:44.015 EAL: Detected lcore 35 as core 8 on socket 1
00:03:44.015 EAL: Detected lcore 36 as core 9 on socket 1
00:03:44.015 EAL: Detected lcore 37 as core 10 on socket 1
00:03:44.015 EAL: Detected lcore 38 as core 11 on socket 1
00:03:44.015 EAL: Detected lcore 39 as core 12 on socket 1
00:03:44.015 EAL: Detected lcore 40 as core 13 on socket 1
00:03:44.015 EAL: Detected lcore 41 as core 14 on socket 1
00:03:44.015 EAL: Detected lcore 42 as core 16 on socket 1
00:03:44.015 EAL: Detected lcore 43 as core 17 on socket 1
00:03:44.015 EAL: Detected lcore 44 as core 18 on socket 1
00:03:44.015 EAL: Detected lcore 45 as core 19 on socket 1
00:03:44.015 EAL: Detected lcore 46 as core 20 on socket 1
00:03:44.015 EAL: Detected lcore 47 as core 21 on socket 1
00:03:44.015 EAL: Detected lcore 48 as core 22 on socket 1
00:03:44.015 EAL: Detected lcore 49 as core 24 on socket 1
00:03:44.015 EAL: Detected lcore 50 as core 25 on socket 1
00:03:44.015 EAL: Detected lcore 51 as core 26 on socket 1
00:03:44.015 EAL: Detected lcore 52 as core 27 on socket 1
00:03:44.015 EAL: Detected lcore 53 as core 28 on socket 1
00:03:44.015 EAL: Detected lcore 54 as core 29 on socket 1
00:03:44.015 EAL: Detected lcore 55 as core 30 on socket 1
00:03:44.015 EAL: Detected lcore 56 as core 0 on socket 0
00:03:44.015 EAL: Detected lcore 57 as core 1 on socket 0
00:03:44.015 EAL: Detected lcore 58 as core 2 on socket 0
00:03:44.015 EAL: Detected lcore 59 as core 3 on socket 0
00:03:44.015 EAL: Detected lcore 60 as core 4 on socket 0
00:03:44.015 EAL: Detected lcore 61 as core 5 on socket 0
00:03:44.015 EAL: Detected lcore 62 as core 6 on socket 0
00:03:44.015 EAL: Detected lcore 63 as core 8 on socket 0
00:03:44.015 EAL: Detected lcore 64 as core 9 on socket 0
00:03:44.015 EAL: Detected lcore 65 as core 10 on socket 0
00:03:44.015 EAL: Detected lcore 66 as core 11 on socket 0
00:03:44.015 EAL: Detected lcore 67 as core 12 on socket 0
00:03:44.015 EAL: Detected lcore 68 as core 13 on socket 0
00:03:44.015 EAL: Detected lcore 69 as core 14 on socket 0
00:03:44.015 EAL: Detected lcore 70 as core 16 on socket 0
00:03:44.015 EAL: Detected lcore 71 as core 17 on socket 0
00:03:44.015 EAL: Detected lcore 72 as core 18 on socket 0
00:03:44.015 EAL: Detected lcore 73 as core 19 on socket 0
00:03:44.015 EAL: Detected lcore 74 as core 20 on socket 0
00:03:44.015 EAL: Detected lcore 75 as core 21 on socket 0
00:03:44.015 EAL: Detected lcore 76 as core 22 on socket 0
00:03:44.015 EAL: Detected lcore 77 as core 24 on socket 0
00:03:44.015 EAL: Detected lcore 78 as core 25 on socket 0
00:03:44.015 EAL: Detected lcore 79 as core 26 on socket 0
00:03:44.015 EAL: Detected lcore 80 as core 27 on socket 0
00:03:44.015 EAL: Detected lcore 81 as core 28 on socket 0
00:03:44.015 EAL: Detected lcore 82 as core 29 on socket 0
00:03:44.015 EAL: Detected lcore 83 as core 30 on socket 0
00:03:44.015 EAL: Detected lcore 84 as core 0 on socket 1
00:03:44.015 EAL: Detected lcore 85 as core 1 on socket 1
00:03:44.015 EAL: Detected lcore 86 as core 2 on socket 1
00:03:44.015 EAL: Detected lcore 87 as core 3 on socket 1
00:03:44.015 EAL: Detected lcore 88 as core 4 on socket 1
00:03:44.015 EAL: Detected lcore 89 as core 5 on socket 1
00:03:44.015 EAL: Detected lcore 90 as core 6 on socket 1
00:03:44.015 EAL: Detected lcore 91 as core 8 on socket 1
00:03:44.015 EAL: Detected lcore 92 as core 9 on socket 1
00:03:44.015 EAL: Detected lcore 93 as core 10 on socket 1
00:03:44.015 EAL: Detected lcore 94 as core 11 on socket 1
00:03:44.015 EAL: Detected lcore 95 as core 12 on socket 1
00:03:44.015 EAL: Detected lcore 96 as core 13 on socket 1
00:03:44.015 EAL: Detected lcore 97 as core 14 on socket 1
00:03:44.015 EAL: Detected lcore 98 as core 16 on socket 1
00:03:44.015 EAL: Detected lcore 99 as core 17 on socket 1
00:03:44.015 EAL: Detected lcore 100 as core 18 on socket 1
00:03:44.015 EAL: Detected lcore 101 as core 19 on socket 1
00:03:44.015 EAL: Detected lcore 102 as core 20 on socket 1
00:03:44.015 EAL: Detected lcore 103 as core 21 on socket 1
00:03:44.015 EAL: Detected lcore 104 as core 22 on socket 1
00:03:44.015 EAL: Detected lcore 105 as core 24 on socket 1
00:03:44.015 EAL: Detected lcore 106 as core 25 on socket 1
00:03:44.015 EAL: Detected lcore 107 as core 26 on socket 1
00:03:44.015 EAL: Detected lcore 108 as core 27 on socket 1
00:03:44.015 EAL: Detected lcore 109 as core 28 on socket 1
00:03:44.015 EAL: Detected lcore 110 as core 29 on socket 1
00:03:44.015 EAL: Detected lcore 111 as core 30 on socket 1
00:03:44.015 EAL: Maximum logical cores by configuration: 128
00:03:44.015 EAL: Detected CPU lcores: 112
00:03:44.015 EAL: Detected NUMA nodes: 2
00:03:44.015 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:44.015 EAL: Detected shared linkage of DPDK
00:03:44.015 EAL: No shared files mode enabled, IPC will be disabled
00:03:44.015 EAL: Bus pci wants IOVA as 'DC'
00:03:44.015 EAL: Buses did not request a specific IOVA mode.
00:03:44.015 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:44.015 EAL: Selected IOVA mode 'VA'
00:03:44.015 EAL: Probing VFIO support...
00:03:44.015 EAL: IOMMU type 1 (Type 1) is supported
00:03:44.015 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:44.015 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:44.015 EAL: VFIO support initialized
00:03:44.015 EAL: Ask a virtual area of 0x2e000 bytes
00:03:44.015 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:44.015 EAL: Setting up physically contiguous memory...
00:03:44.015 EAL: Setting maximum number of open files to 524288
00:03:44.015 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:44.015 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:44.015 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:44.015 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:44.015 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:44.015 EAL: Ask a virtual area of 0x61000 bytes
00:03:44.015 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:44.015 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:44.015 EAL: Ask a virtual area of 0x400000000 bytes
00:03:44.015 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
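Each memseg list above reserves 0x400000000 bytes of virtual address space: 8192 segments of one 2 MiB hugepage (the 0x800kB page size in the log) is 8192 * 2097152 = 17179869184 bytes, i.e. 16 GiB, and the eight lists (four per NUMA node) together cover 128 GiB of reservable VA. The arithmetic checks out in shell:

  printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000, 16 GiB per memseg list
  echo "$(( 8 * 16 )) GiB reservable"     # -> 128 GiB across all 8 lists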
00:03:44.016 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:44.016 EAL: Hugepages will be freed exactly as allocated.
00:03:44.016 EAL: No shared files mode enabled, IPC is disabled
00:03:44.016 EAL: No shared files mode enabled, IPC is disabled
00:03:44.016 EAL: TSC frequency is ~2700000 KHz
00:03:44.016 EAL: Main lcore 0 is ready (tid=7f58bafb8a00;cpuset=[0])
00:03:44.016 EAL: Trying to obtain current memory policy.
00:03:44.016 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.016 EAL: Restoring previous memory policy: 0
00:03:44.016 EAL: request: mp_malloc_sync
00:03:44.016 EAL: No shared files mode enabled, IPC is disabled
00:03:44.016 EAL: Heap on socket 0 was expanded by 2MB
00:03:44.016 EAL: No shared files mode enabled, IPC is disabled
00:03:44.016 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:44.275 EAL: Mem event callback 'spdk:(nil)' registered
00:03:44.275
00:03:44.275
00:03:44.275 CUnit - A unit testing framework for C - Version 2.1-3
00:03:44.275 http://cunit.sourceforge.net/
00:03:44.275
00:03:44.275
00:03:44.275 Suite: components_suite
00:03:44.275 Test: vtophys_malloc_test ...passed
00:03:44.275 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 4MB
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was shrunk by 4MB
00:03:44.275 EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 6MB
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was shrunk by 6MB
00:03:44.275 EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 10MB
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was shrunk by 10MB
00:03:44.275 EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 18MB
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was shrunk by 18MB
00:03:44.275 EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 34MB
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was shrunk by 34MB
00:03:44.275 EAL: Trying to obtain current memory policy.
00:03:44.275 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.275 EAL: Restoring previous memory policy: 4
00:03:44.275 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.275 EAL: request: mp_malloc_sync
00:03:44.275 EAL: No shared files mode enabled, IPC is disabled
00:03:44.275 EAL: Heap on socket 0 was expanded by 66MB
00:03:44.276 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.276 EAL: request: mp_malloc_sync
00:03:44.276 EAL: No shared files mode enabled, IPC is disabled
00:03:44.276 EAL: Heap on socket 0 was shrunk by 66MB
00:03:44.276 EAL: Trying to obtain current memory policy.
00:03:44.276 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.276 EAL: Restoring previous memory policy: 4
00:03:44.276 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.276 EAL: request: mp_malloc_sync
00:03:44.276 EAL: No shared files mode enabled, IPC is disabled
00:03:44.276 EAL: Heap on socket 0 was expanded by 130MB
00:03:44.276 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.276 EAL: request: mp_malloc_sync
00:03:44.276 EAL: No shared files mode enabled, IPC is disabled
00:03:44.276 EAL: Heap on socket 0 was shrunk by 130MB
00:03:44.276 EAL: Trying to obtain current memory policy.
00:03:44.276 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.276 EAL: Restoring previous memory policy: 4
00:03:44.276 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.276 EAL: request: mp_malloc_sync
00:03:44.276 EAL: No shared files mode enabled, IPC is disabled
00:03:44.276 EAL: Heap on socket 0 was expanded by 258MB
00:03:44.276 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.276 EAL: request: mp_malloc_sync
00:03:44.276 EAL: No shared files mode enabled, IPC is disabled
00:03:44.276 EAL: Heap on socket 0 was shrunk by 258MB
00:03:44.276 EAL: Trying to obtain current memory policy.
00:03:44.276 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.535 EAL: Restoring previous memory policy: 4
00:03:44.535 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.535 EAL: request: mp_malloc_sync
00:03:44.535 EAL: No shared files mode enabled, IPC is disabled
00:03:44.535 EAL: Heap on socket 0 was expanded by 514MB
00:03:44.535 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.535 EAL: request: mp_malloc_sync
00:03:44.535 EAL: No shared files mode enabled, IPC is disabled
00:03:44.535 EAL: Heap on socket 0 was shrunk by 514MB
00:03:44.535 EAL: Trying to obtain current memory policy.
00:03:44.535 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:44.794 EAL: Restoring previous memory policy: 4
00:03:44.794 EAL: Calling mem event callback 'spdk:(nil)'
00:03:44.794 EAL: request: mp_malloc_sync
00:03:44.794 EAL: No shared files mode enabled, IPC is disabled
00:03:44.794 EAL: Heap on socket 0 was expanded by 1026MB
00:03:45.053 EAL: Calling mem event callback 'spdk:(nil)'
00:03:45.053 EAL: request: mp_malloc_sync
00:03:45.053 EAL: No shared files mode enabled, IPC is disabled
00:03:45.053 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:45.053 passed
00:03:45.053
00:03:45.053 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:45.053               suites      1      1    n/a      0        0
00:03:45.053                tests      2      2      2      0        0
00:03:45.053              asserts    497    497    497      0      n/a
00:03:45.053
00:03:45.053 Elapsed time =    0.951 seconds
00:03:45.053 EAL: Calling mem event callback 'spdk:(nil)'
00:03:45.053 EAL: request: mp_malloc_sync
00:03:45.053 EAL: No shared files mode enabled, IPC is disabled
00:03:45.053 EAL: Heap on socket 0 was shrunk by 2MB
00:03:45.053 EAL: No shared files mode enabled, IPC is disabled
00:03:45.053 EAL: No shared files mode enabled, IPC is disabled
00:03:45.053 EAL: No shared files mode enabled, IPC is disabled
00:03:45.053
00:03:45.053 real    0m1.066s
00:03:45.053 user    0m0.634s
00:03:45.053 sys     0m0.405s
00:03:45.053 21:00:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:45.053 21:00:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:45.053 ************************************
00:03:45.053 END TEST env_vtophys
00:03:45.053 ************************************
00:03:45.053 21:00:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:03:45.312 21:00:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:45.312 21:00:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:45.312 21:00:32 env -- common/autotest_common.sh@10 -- # set +x
00:03:45.312 ************************************
00:03:45.312 START TEST env_pci
00:03:45.312 ************************************
00:03:45.312 21:00:32 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut
00:03:45.312
00:03:45.312
00:03:45.312 CUnit - A unit testing framework for C - Version 2.1-3
00:03:45.312 http://cunit.sourceforge.net/
00:03:45.312
00:03:45.312
00:03:45.312 Suite: pci
00:03:45.312 Test: pci_hook ...[2024-11-26 21:00:32.530474] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3698403 has claimed it
00:03:45.312 EAL: Cannot find device (10000:00:01.0)
00:03:45.312 EAL: Failed to attach device on primary process
00:03:45.312 passed
00:03:45.312
00:03:45.312 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:45.312               suites      1      1    n/a      0        0
00:03:45.312                tests      1      1      1      0        0
00:03:45.312              asserts     25     25     25      0      n/a
00:03:45.312
00:03:45.312 Elapsed time =    0.029 seconds
00:03:45.312
00:03:45.312 real    0m0.049s
00:03:45.312 user    0m0.014s
00:03:45.312 sys     0m0.034s
00:03:45.312 21:00:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:45.312 21:00:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:45.312 ************************************
00:03:45.312 END TEST env_pci
00:03:45.312 ************************************
00:03:45.312 21:00:32 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:45.312 21:00:32 env -- env/env.sh@15 -- # uname
00:03:45.312 21:00:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:45.312 21:00:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:45.312 21:00:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:45.312 21:00:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:45.312 21:00:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:45.312 21:00:32 env -- common/autotest_common.sh@10 -- # set +x
00:03:45.312 ************************************
00:03:45.312 START TEST env_dpdk_post_init
00:03:45.312 ************************************
00:03:45.312 21:00:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:45.312 EAL: Detected CPU lcores: 112
00:03:45.312 EAL: Detected NUMA nodes: 2
00:03:45.312 EAL: Detected shared linkage of DPDK
00:03:45.312 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:45.312 EAL: Selected IOVA mode 'VA'
00:03:45.312 EAL: VFIO support initialized
00:03:45.572 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:45.572 EAL: Using IOMMU type 1 (Type 1)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:45.572 EAL: Ignore mapping IO port bar(1)
bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:45.572 EAL: Ignore mapping IO port bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:45.572 EAL: Ignore mapping IO port bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:45.572 EAL: Ignore mapping IO port bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:45.572 EAL: Ignore mapping IO port bar(1) 00:03:45.572 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:46.509 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:03:51.777 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:03:51.777 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:03:52.035 Starting DPDK initialization... 00:03:52.035 Starting SPDK post initialization... 00:03:52.035 SPDK NVMe probe 00:03:52.035 Attaching to 0000:d8:00.0 00:03:52.035 Attached to 0000:d8:00.0 00:03:52.035 Cleaning up... 00:03:52.035 00:03:52.035 real 0m6.688s 00:03:52.035 user 0m5.152s 00:03:52.035 sys 0m0.599s 00:03:52.035 21:00:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.035 21:00:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:52.035 ************************************ 00:03:52.035 END TEST env_dpdk_post_init 00:03:52.035 ************************************ 00:03:52.035 21:00:39 env -- env/env.sh@26 -- # uname 00:03:52.035 21:00:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:52.035 21:00:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:52.035 21:00:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.035 21:00:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.035 21:00:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.035 ************************************ 00:03:52.035 START TEST env_mem_callbacks 00:03:52.035 ************************************ 00:03:52.035 21:00:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:52.035 EAL: Detected CPU lcores: 112 00:03:52.035 EAL: Detected NUMA nodes: 2 00:03:52.035 EAL: Detected shared linkage of DPDK 00:03:52.035 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:52.035 EAL: Selected IOVA mode 'VA' 00:03:52.035 EAL: VFIO support initialized 00:03:52.035 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:52.035 00:03:52.035 00:03:52.035 CUnit - A unit testing framework for C - Version 2.1-3 00:03:52.035 http://cunit.sourceforge.net/ 00:03:52.035 00:03:52.035 00:03:52.035 Suite: memory 00:03:52.035 Test: test ... 
00:03:52.035 register 0x200000200000 2097152 00:03:52.035 malloc 3145728 00:03:52.035 register 0x200000400000 4194304 00:03:52.035 buf 0x200000500000 len 3145728 PASSED 00:03:52.035 malloc 64 00:03:52.035 buf 0x2000004fff40 len 64 PASSED 00:03:52.035 malloc 4194304 00:03:52.035 register 0x200000800000 6291456 00:03:52.035 buf 0x200000a00000 len 4194304 PASSED 00:03:52.035 free 0x200000500000 3145728 00:03:52.035 free 0x2000004fff40 64 00:03:52.035 unregister 0x200000400000 4194304 PASSED 00:03:52.035 free 0x200000a00000 4194304 00:03:52.035 unregister 0x200000800000 6291456 PASSED 00:03:52.035 malloc 8388608 00:03:52.035 register 0x200000400000 10485760 00:03:52.035 buf 0x200000600000 len 8388608 PASSED 00:03:52.035 free 0x200000600000 8388608 00:03:52.035 unregister 0x200000400000 10485760 PASSED 00:03:52.035 passed 00:03:52.035 00:03:52.035 Run Summary: Type Total Ran Passed Failed Inactive 00:03:52.035 suites 1 1 n/a 0 0 00:03:52.035 tests 1 1 1 0 0 00:03:52.035 asserts 15 15 15 0 n/a 00:03:52.035 00:03:52.035 Elapsed time = 0.005 seconds 00:03:52.035 00:03:52.035 real 0m0.056s 00:03:52.035 user 0m0.018s 00:03:52.035 sys 0m0.038s 00:03:52.035 21:00:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.035 21:00:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:52.035 ************************************ 00:03:52.035 END TEST env_mem_callbacks 00:03:52.035 ************************************ 00:03:52.294 00:03:52.294 real 0m8.512s 00:03:52.294 user 0m6.186s 00:03:52.294 sys 0m1.396s 00:03:52.294 21:00:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.294 21:00:39 env -- common/autotest_common.sh@10 -- # set +x 00:03:52.294 ************************************ 00:03:52.294 END TEST env 00:03:52.294 ************************************ 00:03:52.294 21:00:39 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:52.294 21:00:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.294 21:00:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.294 21:00:39 -- common/autotest_common.sh@10 -- # set +x 00:03:52.294 ************************************ 00:03:52.294 START TEST rpc 00:03:52.294 ************************************ 00:03:52.294 21:00:39 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:03:52.294 * Looking for test storage... 
00:03:52.294 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:52.294 21:00:39 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.294 21:00:39 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.294 21:00:39 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.294 21:00:39 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.294 21:00:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.294 21:00:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.294 21:00:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.294 21:00:39 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.294 21:00:39 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.294 21:00:39 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.294 21:00:39 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.294 21:00:39 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.294 21:00:39 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.294 21:00:39 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.294 21:00:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.294 21:00:39 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.295 21:00:39 rpc -- scripts/common.sh@345 -- # : 1 00:03:52.295 21:00:39 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.295 21:00:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.295 21:00:39 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.295 21:00:39 rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.295 21:00:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.295 21:00:39 rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.295 21:00:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.295 21:00:39 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.295 21:00:39 rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.295 21:00:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.295 21:00:39 rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.295 21:00:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.295 21:00:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.295 21:00:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.295 21:00:39 rpc -- scripts/common.sh@368 -- # return 0 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.295 --rc genhtml_branch_coverage=1 00:03:52.295 --rc genhtml_function_coverage=1 00:03:52.295 --rc genhtml_legend=1 00:03:52.295 --rc geninfo_all_blocks=1 00:03:52.295 --rc geninfo_unexecuted_blocks=1 00:03:52.295 00:03:52.295 ' 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.295 --rc genhtml_branch_coverage=1 00:03:52.295 --rc genhtml_function_coverage=1 00:03:52.295 --rc genhtml_legend=1 00:03:52.295 --rc geninfo_all_blocks=1 00:03:52.295 --rc geninfo_unexecuted_blocks=1 00:03:52.295 00:03:52.295 ' 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.295 --rc genhtml_branch_coverage=1 00:03:52.295 --rc genhtml_function_coverage=1 00:03:52.295 
--rc genhtml_legend=1 00:03:52.295 --rc geninfo_all_blocks=1 00:03:52.295 --rc geninfo_unexecuted_blocks=1 00:03:52.295 00:03:52.295 ' 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.295 --rc genhtml_branch_coverage=1 00:03:52.295 --rc genhtml_function_coverage=1 00:03:52.295 --rc genhtml_legend=1 00:03:52.295 --rc geninfo_all_blocks=1 00:03:52.295 --rc geninfo_unexecuted_blocks=1 00:03:52.295 00:03:52.295 ' 00:03:52.295 21:00:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3699849 00:03:52.295 21:00:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.295 21:00:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:52.295 21:00:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3699849 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 3699849 ']' 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.295 21:00:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.554 21:00:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.554 21:00:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.554 [2024-11-26 21:00:39.773858] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:03:52.554 [2024-11-26 21:00:39.773901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3699849 ] 00:03:52.554 [2024-11-26 21:00:39.829640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.554 [2024-11-26 21:00:39.867936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:52.554 [2024-11-26 21:00:39.867971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3699849' to capture a snapshot of events at runtime. 00:03:52.554 [2024-11-26 21:00:39.867978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:52.554 [2024-11-26 21:00:39.867984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:52.554 [2024-11-26 21:00:39.867988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3699849 for offline analysis/debug. 
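Note: the app_setup_trace messages above show the target was launched with the bdev tracepoint group enabled (spdk_tgt -e bdev), so its shared-memory trace buffer can be inspected while the target runs or after it exits. A minimal sketch of both workflows, using the command the notice itself suggests; the build/bin path and the -f option for offline parsing are assumptions about the local build layout, not part of this test:

    # live capture from the running target (command suggested by the notice above)
    ./build/bin/spdk_trace -s spdk_tgt -p 3699849

    # offline: preserve the shm file before the target exits, then parse the copy
    cp /dev/shm/spdk_tgt_trace.pid3699849 /tmp/
    ./build/bin/spdk_trace -f /tmp/spdk_tgt_trace.pid3699849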
00:03:52.554 [2024-11-26 21:00:39.868452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.814 21:00:40 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:52.814 21:00:40 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:52.814 21:00:40 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:52.814 21:00:40 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:52.814 21:00:40 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:52.814 21:00:40 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:52.814 21:00:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.814 21:00:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.814 21:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.814 ************************************ 00:03:52.814 START TEST rpc_integrity 00:03:52.814 ************************************ 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.814 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.814 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.814 { 00:03:52.814 "name": "Malloc0", 00:03:52.814 "aliases": [ 00:03:52.814 "5a9109d1-2153-41a6-b831-ef0303488513" 00:03:52.815 ], 00:03:52.815 "product_name": "Malloc disk", 00:03:52.815 "block_size": 512, 00:03:52.815 "num_blocks": 16384, 00:03:52.815 "uuid": "5a9109d1-2153-41a6-b831-ef0303488513", 00:03:52.815 "assigned_rate_limits": { 00:03:52.815 "rw_ios_per_sec": 0, 00:03:52.815 "rw_mbytes_per_sec": 0, 00:03:52.815 "r_mbytes_per_sec": 0, 00:03:52.815 "w_mbytes_per_sec": 0 00:03:52.815 }, 00:03:52.815 "claimed": false, 
00:03:52.815 "zoned": false, 00:03:52.815 "supported_io_types": { 00:03:52.815 "read": true, 00:03:52.815 "write": true, 00:03:52.815 "unmap": true, 00:03:52.815 "flush": true, 00:03:52.815 "reset": true, 00:03:52.815 "nvme_admin": false, 00:03:52.815 "nvme_io": false, 00:03:52.815 "nvme_io_md": false, 00:03:52.815 "write_zeroes": true, 00:03:52.815 "zcopy": true, 00:03:52.815 "get_zone_info": false, 00:03:52.815 "zone_management": false, 00:03:52.815 "zone_append": false, 00:03:52.815 "compare": false, 00:03:52.815 "compare_and_write": false, 00:03:52.815 "abort": true, 00:03:52.815 "seek_hole": false, 00:03:52.815 "seek_data": false, 00:03:52.815 "copy": true, 00:03:52.815 "nvme_iov_md": false 00:03:52.815 }, 00:03:52.815 "memory_domains": [ 00:03:52.815 { 00:03:52.815 "dma_device_id": "system", 00:03:52.815 "dma_device_type": 1 00:03:52.815 }, 00:03:52.815 { 00:03:52.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.815 "dma_device_type": 2 00:03:52.815 } 00:03:52.815 ], 00:03:52.815 "driver_specific": {} 00:03:52.815 } 00:03:52.815 ]' 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.815 [2024-11-26 21:00:40.220414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:52.815 [2024-11-26 21:00:40.220442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.815 [2024-11-26 21:00:40.220452] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10703c0 00:03:52.815 [2024-11-26 21:00:40.220457] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.815 [2024-11-26 21:00:40.221476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.815 [2024-11-26 21:00:40.221496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.815 Passthru0 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:52.815 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.815 { 00:03:52.815 "name": "Malloc0", 00:03:52.815 "aliases": [ 00:03:52.815 "5a9109d1-2153-41a6-b831-ef0303488513" 00:03:52.815 ], 00:03:52.815 "product_name": "Malloc disk", 00:03:52.815 "block_size": 512, 00:03:52.815 "num_blocks": 16384, 00:03:52.815 "uuid": "5a9109d1-2153-41a6-b831-ef0303488513", 00:03:52.815 "assigned_rate_limits": { 00:03:52.815 "rw_ios_per_sec": 0, 00:03:52.815 "rw_mbytes_per_sec": 0, 00:03:52.815 "r_mbytes_per_sec": 0, 00:03:52.815 "w_mbytes_per_sec": 0 00:03:52.815 }, 00:03:52.815 "claimed": true, 00:03:52.815 "claim_type": "exclusive_write", 00:03:52.815 "zoned": false, 00:03:52.815 "supported_io_types": { 00:03:52.815 "read": true, 00:03:52.815 "write": true, 00:03:52.815 "unmap": true, 00:03:52.815 "flush": true, 00:03:52.815 "reset": true, 
00:03:52.815 "nvme_admin": false, 00:03:52.815 "nvme_io": false, 00:03:52.815 "nvme_io_md": false, 00:03:52.815 "write_zeroes": true, 00:03:52.815 "zcopy": true, 00:03:52.815 "get_zone_info": false, 00:03:52.815 "zone_management": false, 00:03:52.815 "zone_append": false, 00:03:52.815 "compare": false, 00:03:52.815 "compare_and_write": false, 00:03:52.815 "abort": true, 00:03:52.815 "seek_hole": false, 00:03:52.815 "seek_data": false, 00:03:52.815 "copy": true, 00:03:52.815 "nvme_iov_md": false 00:03:52.815 }, 00:03:52.815 "memory_domains": [ 00:03:52.815 { 00:03:52.815 "dma_device_id": "system", 00:03:52.815 "dma_device_type": 1 00:03:52.815 }, 00:03:52.815 { 00:03:52.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.815 "dma_device_type": 2 00:03:52.815 } 00:03:52.815 ], 00:03:52.815 "driver_specific": {} 00:03:52.815 }, 00:03:52.815 { 00:03:52.815 "name": "Passthru0", 00:03:52.815 "aliases": [ 00:03:52.815 "f22fe54a-0b05-5225-8f11-bb47dcdf420f" 00:03:52.815 ], 00:03:52.815 "product_name": "passthru", 00:03:52.815 "block_size": 512, 00:03:52.815 "num_blocks": 16384, 00:03:52.815 "uuid": "f22fe54a-0b05-5225-8f11-bb47dcdf420f", 00:03:52.815 "assigned_rate_limits": { 00:03:52.815 "rw_ios_per_sec": 0, 00:03:52.815 "rw_mbytes_per_sec": 0, 00:03:52.815 "r_mbytes_per_sec": 0, 00:03:52.815 "w_mbytes_per_sec": 0 00:03:52.815 }, 00:03:52.815 "claimed": false, 00:03:52.815 "zoned": false, 00:03:52.815 "supported_io_types": { 00:03:52.815 "read": true, 00:03:52.815 "write": true, 00:03:52.815 "unmap": true, 00:03:52.815 "flush": true, 00:03:52.815 "reset": true, 00:03:52.815 "nvme_admin": false, 00:03:52.815 "nvme_io": false, 00:03:52.815 "nvme_io_md": false, 00:03:52.815 "write_zeroes": true, 00:03:52.815 "zcopy": true, 00:03:52.815 "get_zone_info": false, 00:03:52.815 "zone_management": false, 00:03:52.815 "zone_append": false, 00:03:52.815 "compare": false, 00:03:52.815 "compare_and_write": false, 00:03:52.815 "abort": true, 00:03:52.815 "seek_hole": false, 00:03:52.815 "seek_data": false, 00:03:52.815 "copy": true, 00:03:52.815 "nvme_iov_md": false 00:03:52.815 }, 00:03:52.815 "memory_domains": [ 00:03:52.815 { 00:03:52.815 "dma_device_id": "system", 00:03:52.815 "dma_device_type": 1 00:03:52.815 }, 00:03:52.815 { 00:03:52.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.815 "dma_device_type": 2 00:03:52.815 } 00:03:52.815 ], 00:03:52.815 "driver_specific": { 00:03:52.815 "passthru": { 00:03:52.815 "name": "Passthru0", 00:03:52.815 "base_bdev_name": "Malloc0" 00:03:52.815 } 00:03:52.815 } 00:03:52.815 } 00:03:52.815 ]' 00:03:52.815 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.075 
21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.075 21:00:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.075 00:03:53.075 real 0m0.257s 00:03:53.075 user 0m0.175s 00:03:53.075 sys 0m0.035s 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 ************************************ 00:03:53.075 END TEST rpc_integrity 00:03:53.075 ************************************ 00:03:53.075 21:00:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:53.075 21:00:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.075 21:00:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.075 21:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 ************************************ 00:03:53.075 START TEST rpc_plugins 00:03:53.075 ************************************ 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:53.075 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.075 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:53.075 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.075 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.076 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.076 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:53.076 { 00:03:53.076 "name": "Malloc1", 00:03:53.076 "aliases": [ 00:03:53.076 "28d4c002-7175-4fa2-a813-ab35aaeec0a0" 00:03:53.076 ], 00:03:53.076 "product_name": "Malloc disk", 00:03:53.076 "block_size": 4096, 00:03:53.076 "num_blocks": 256, 00:03:53.076 "uuid": "28d4c002-7175-4fa2-a813-ab35aaeec0a0", 00:03:53.076 "assigned_rate_limits": { 00:03:53.076 "rw_ios_per_sec": 0, 00:03:53.076 "rw_mbytes_per_sec": 0, 00:03:53.076 "r_mbytes_per_sec": 0, 00:03:53.076 "w_mbytes_per_sec": 0 00:03:53.076 }, 00:03:53.076 "claimed": false, 00:03:53.076 "zoned": false, 00:03:53.076 "supported_io_types": { 00:03:53.076 "read": true, 00:03:53.076 "write": true, 00:03:53.076 "unmap": true, 00:03:53.076 "flush": true, 00:03:53.076 "reset": true, 00:03:53.076 "nvme_admin": false, 00:03:53.076 "nvme_io": false, 00:03:53.076 "nvme_io_md": false, 00:03:53.076 "write_zeroes": true, 00:03:53.076 "zcopy": true, 00:03:53.076 "get_zone_info": false, 00:03:53.076 "zone_management": false, 00:03:53.076 "zone_append": false, 00:03:53.076 "compare": false, 00:03:53.076 "compare_and_write": false, 00:03:53.076 "abort": true, 00:03:53.076 "seek_hole": false, 00:03:53.076 "seek_data": false, 00:03:53.076 "copy": true, 00:03:53.076 "nvme_iov_md": false 00:03:53.076 }, 00:03:53.076 
"memory_domains": [ 00:03:53.076 { 00:03:53.076 "dma_device_id": "system", 00:03:53.076 "dma_device_type": 1 00:03:53.076 }, 00:03:53.076 { 00:03:53.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.076 "dma_device_type": 2 00:03:53.076 } 00:03:53.076 ], 00:03:53.076 "driver_specific": {} 00:03:53.076 } 00:03:53.076 ]' 00:03:53.076 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:53.076 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:53.076 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:53.076 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.076 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.076 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.335 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:53.335 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.335 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.335 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.335 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:53.335 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:53.335 21:00:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:53.335 00:03:53.335 real 0m0.129s 00:03:53.335 user 0m0.074s 00:03:53.335 sys 0m0.021s 00:03:53.335 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.335 21:00:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:53.335 ************************************ 00:03:53.335 END TEST rpc_plugins 00:03:53.335 ************************************ 00:03:53.335 21:00:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:53.335 21:00:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.335 21:00:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.335 21:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.335 ************************************ 00:03:53.335 START TEST rpc_trace_cmd_test 00:03:53.335 ************************************ 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.335 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:53.335 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3699849", 00:03:53.335 "tpoint_group_mask": "0x8", 00:03:53.335 "iscsi_conn": { 00:03:53.335 "mask": "0x2", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "scsi": { 00:03:53.335 "mask": "0x4", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "bdev": { 00:03:53.335 "mask": "0x8", 00:03:53.335 "tpoint_mask": "0xffffffffffffffff" 00:03:53.335 }, 00:03:53.335 "nvmf_rdma": { 00:03:53.335 "mask": "0x10", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "nvmf_tcp": { 00:03:53.335 "mask": "0x20", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 
00:03:53.335 "ftl": { 00:03:53.335 "mask": "0x40", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "blobfs": { 00:03:53.335 "mask": "0x80", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "dsa": { 00:03:53.335 "mask": "0x200", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "thread": { 00:03:53.335 "mask": "0x400", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.335 }, 00:03:53.335 "nvme_pcie": { 00:03:53.335 "mask": "0x800", 00:03:53.335 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "iaa": { 00:03:53.336 "mask": "0x1000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "nvme_tcp": { 00:03:53.336 "mask": "0x2000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "bdev_nvme": { 00:03:53.336 "mask": "0x4000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "sock": { 00:03:53.336 "mask": "0x8000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "blob": { 00:03:53.336 "mask": "0x10000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "bdev_raid": { 00:03:53.336 "mask": "0x20000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 }, 00:03:53.336 "scheduler": { 00:03:53.336 "mask": "0x40000", 00:03:53.336 "tpoint_mask": "0x0" 00:03:53.336 } 00:03:53.336 }' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:53.336 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:53.594 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:53.594 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:53.594 21:00:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:53.594 00:03:53.594 real 0m0.203s 00:03:53.594 user 0m0.163s 00:03:53.594 sys 0m0.034s 00:03:53.594 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.594 21:00:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:53.594 ************************************ 00:03:53.594 END TEST rpc_trace_cmd_test 00:03:53.594 ************************************ 00:03:53.594 21:00:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:53.594 21:00:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:53.594 21:00:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:53.594 21:00:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.594 21:00:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.594 21:00:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:53.595 ************************************ 00:03:53.595 START TEST rpc_daemon_integrity 00:03:53.595 ************************************ 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:53.595 { 00:03:53.595 "name": "Malloc2", 00:03:53.595 "aliases": [ 00:03:53.595 "a3cecd51-e482-4766-87a6-2f935f16ad39" 00:03:53.595 ], 00:03:53.595 "product_name": "Malloc disk", 00:03:53.595 "block_size": 512, 00:03:53.595 "num_blocks": 16384, 00:03:53.595 "uuid": "a3cecd51-e482-4766-87a6-2f935f16ad39", 00:03:53.595 "assigned_rate_limits": { 00:03:53.595 "rw_ios_per_sec": 0, 00:03:53.595 "rw_mbytes_per_sec": 0, 00:03:53.595 "r_mbytes_per_sec": 0, 00:03:53.595 "w_mbytes_per_sec": 0 00:03:53.595 }, 00:03:53.595 "claimed": false, 00:03:53.595 "zoned": false, 00:03:53.595 "supported_io_types": { 00:03:53.595 "read": true, 00:03:53.595 "write": true, 00:03:53.595 "unmap": true, 00:03:53.595 "flush": true, 00:03:53.595 "reset": true, 00:03:53.595 "nvme_admin": false, 00:03:53.595 "nvme_io": false, 00:03:53.595 "nvme_io_md": false, 00:03:53.595 "write_zeroes": true, 00:03:53.595 "zcopy": true, 00:03:53.595 "get_zone_info": false, 00:03:53.595 "zone_management": false, 00:03:53.595 "zone_append": false, 00:03:53.595 "compare": false, 00:03:53.595 "compare_and_write": false, 00:03:53.595 "abort": true, 00:03:53.595 "seek_hole": false, 00:03:53.595 "seek_data": false, 00:03:53.595 "copy": true, 00:03:53.595 "nvme_iov_md": false 00:03:53.595 }, 00:03:53.595 "memory_domains": [ 00:03:53.595 { 00:03:53.595 "dma_device_id": "system", 00:03:53.595 "dma_device_type": 1 00:03:53.595 }, 00:03:53.595 { 00:03:53.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.595 "dma_device_type": 2 00:03:53.595 } 00:03:53.595 ], 00:03:53.595 "driver_specific": {} 00:03:53.595 } 00:03:53.595 ]' 00:03:53.595 21:00:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.595 [2024-11-26 21:00:41.014504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:53.595 [2024-11-26 21:00:41.014530] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:53.595 [2024-11-26 21:00:41.014543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1072290 00:03:53.595 [2024-11-26 21:00:41.014549] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:53.595 [2024-11-26 21:00:41.015475] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:53.595 [2024-11-26 21:00:41.015494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:53.595 Passthru0 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.595 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:53.920 { 00:03:53.920 "name": "Malloc2", 00:03:53.920 "aliases": [ 00:03:53.920 "a3cecd51-e482-4766-87a6-2f935f16ad39" 00:03:53.920 ], 00:03:53.920 "product_name": "Malloc disk", 00:03:53.920 "block_size": 512, 00:03:53.920 "num_blocks": 16384, 00:03:53.920 "uuid": "a3cecd51-e482-4766-87a6-2f935f16ad39", 00:03:53.920 "assigned_rate_limits": { 00:03:53.920 "rw_ios_per_sec": 0, 00:03:53.920 "rw_mbytes_per_sec": 0, 00:03:53.920 "r_mbytes_per_sec": 0, 00:03:53.920 "w_mbytes_per_sec": 0 00:03:53.920 }, 00:03:53.920 "claimed": true, 00:03:53.920 "claim_type": "exclusive_write", 00:03:53.920 "zoned": false, 00:03:53.920 "supported_io_types": { 00:03:53.920 "read": true, 00:03:53.920 "write": true, 00:03:53.920 "unmap": true, 00:03:53.920 "flush": true, 00:03:53.920 "reset": true, 00:03:53.920 "nvme_admin": false, 00:03:53.920 "nvme_io": false, 00:03:53.920 "nvme_io_md": false, 00:03:53.920 "write_zeroes": true, 00:03:53.920 "zcopy": true, 00:03:53.920 "get_zone_info": false, 00:03:53.920 "zone_management": false, 00:03:53.920 "zone_append": false, 00:03:53.920 "compare": false, 00:03:53.920 "compare_and_write": false, 00:03:53.920 "abort": true, 00:03:53.920 "seek_hole": false, 00:03:53.920 "seek_data": false, 00:03:53.920 "copy": true, 00:03:53.920 "nvme_iov_md": false 00:03:53.920 }, 00:03:53.920 "memory_domains": [ 00:03:53.920 { 00:03:53.920 "dma_device_id": "system", 00:03:53.920 "dma_device_type": 1 00:03:53.920 }, 00:03:53.920 { 00:03:53.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.920 "dma_device_type": 2 00:03:53.920 } 00:03:53.920 ], 00:03:53.920 "driver_specific": {} 00:03:53.920 }, 00:03:53.920 { 00:03:53.920 "name": "Passthru0", 00:03:53.920 "aliases": [ 00:03:53.920 "a4ce395e-fb97-5aa6-87e9-32ba74ab11de" 00:03:53.920 ], 00:03:53.920 "product_name": "passthru", 00:03:53.920 "block_size": 512, 00:03:53.920 "num_blocks": 16384, 00:03:53.920 "uuid": "a4ce395e-fb97-5aa6-87e9-32ba74ab11de", 00:03:53.920 "assigned_rate_limits": { 00:03:53.920 "rw_ios_per_sec": 0, 00:03:53.920 "rw_mbytes_per_sec": 0, 00:03:53.920 "r_mbytes_per_sec": 0, 00:03:53.920 "w_mbytes_per_sec": 0 00:03:53.920 }, 00:03:53.920 "claimed": false, 00:03:53.920 "zoned": false, 00:03:53.920 "supported_io_types": { 00:03:53.920 "read": true, 00:03:53.920 "write": true, 00:03:53.920 "unmap": true, 00:03:53.920 "flush": true, 00:03:53.920 "reset": true, 00:03:53.920 "nvme_admin": false, 
00:03:53.920 "nvme_io": false, 00:03:53.920 "nvme_io_md": false, 00:03:53.920 "write_zeroes": true, 00:03:53.920 "zcopy": true, 00:03:53.920 "get_zone_info": false, 00:03:53.920 "zone_management": false, 00:03:53.920 "zone_append": false, 00:03:53.920 "compare": false, 00:03:53.920 "compare_and_write": false, 00:03:53.920 "abort": true, 00:03:53.920 "seek_hole": false, 00:03:53.920 "seek_data": false, 00:03:53.920 "copy": true, 00:03:53.920 "nvme_iov_md": false 00:03:53.920 }, 00:03:53.920 "memory_domains": [ 00:03:53.920 { 00:03:53.920 "dma_device_id": "system", 00:03:53.920 "dma_device_type": 1 00:03:53.920 }, 00:03:53.920 { 00:03:53.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:53.920 "dma_device_type": 2 00:03:53.920 } 00:03:53.920 ], 00:03:53.920 "driver_specific": { 00:03:53.920 "passthru": { 00:03:53.920 "name": "Passthru0", 00:03:53.920 "base_bdev_name": "Malloc2" 00:03:53.920 } 00:03:53.920 } 00:03:53.920 } 00:03:53.920 ]' 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.920 00:03:53.920 real 0m0.253s 00:03:53.920 user 0m0.164s 00:03:53.920 sys 0m0.037s 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.920 21:00:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:53.920 ************************************ 00:03:53.920 END TEST rpc_daemon_integrity 00:03:53.920 ************************************ 00:03:53.920 21:00:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:53.920 21:00:41 rpc -- rpc/rpc.sh@84 -- # killprocess 3699849 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 3699849 ']' 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@958 -- # kill -0 3699849 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@959 -- # uname 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3699849 00:03:53.920 21:00:41 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3699849' 00:03:53.920 killing process with pid 3699849 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@973 -- # kill 3699849 00:03:53.920 21:00:41 rpc -- common/autotest_common.sh@978 -- # wait 3699849 00:03:54.181 00:03:54.181 real 0m1.960s 00:03:54.181 user 0m2.525s 00:03:54.181 sys 0m0.644s 00:03:54.181 21:00:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.181 21:00:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.181 ************************************ 00:03:54.181 END TEST rpc 00:03:54.181 ************************************ 00:03:54.181 21:00:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:54.181 21:00:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.181 21:00:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.181 21:00:41 -- common/autotest_common.sh@10 -- # set +x 00:03:54.181 ************************************ 00:03:54.181 START TEST skip_rpc 00:03:54.181 ************************************ 00:03:54.181 21:00:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:54.473 * Looking for test storage... 00:03:54.473 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.473 21:00:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.473 --rc genhtml_branch_coverage=1 00:03:54.473 --rc genhtml_function_coverage=1 00:03:54.473 --rc genhtml_legend=1 00:03:54.473 --rc geninfo_all_blocks=1 00:03:54.473 --rc geninfo_unexecuted_blocks=1 00:03:54.473 00:03:54.473 ' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.473 --rc genhtml_branch_coverage=1 00:03:54.473 --rc genhtml_function_coverage=1 00:03:54.473 --rc genhtml_legend=1 00:03:54.473 --rc geninfo_all_blocks=1 00:03:54.473 --rc geninfo_unexecuted_blocks=1 00:03:54.473 00:03:54.473 ' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.473 --rc genhtml_branch_coverage=1 00:03:54.473 --rc genhtml_function_coverage=1 00:03:54.473 --rc genhtml_legend=1 00:03:54.473 --rc geninfo_all_blocks=1 00:03:54.473 --rc geninfo_unexecuted_blocks=1 00:03:54.473 00:03:54.473 ' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.473 --rc genhtml_branch_coverage=1 00:03:54.473 --rc genhtml_function_coverage=1 00:03:54.473 --rc genhtml_legend=1 00:03:54.473 --rc geninfo_all_blocks=1 00:03:54.473 --rc geninfo_unexecuted_blocks=1 00:03:54.473 00:03:54.473 ' 00:03:54.473 21:00:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:54.473 21:00:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:54.473 21:00:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.473 21:00:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.473 ************************************ 00:03:54.473 START TEST skip_rpc 00:03:54.473 ************************************ 00:03:54.473 21:00:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:54.473 21:00:41 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3700303 00:03:54.473 21:00:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:54.473 21:00:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:54.473 21:00:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:54.473 [2024-11-26 21:00:41.828313] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:03:54.473 [2024-11-26 21:00:41.828350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3700303 ] 00:03:54.473 [2024-11-26 21:00:41.885202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.732 [2024-11-26 21:00:41.923164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3700303 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3700303 ']' 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3700303 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3700303 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3700303' 00:04:00.003 killing process with pid 3700303 00:04:00.003 21:00:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3700303 00:04:00.003 21:00:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3700303 00:04:00.003 00:04:00.003 real 0m5.344s 00:04:00.003 user 0m5.115s 00:04:00.003 sys 0m0.260s 00:04:00.003 21:00:47 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.003 21:00:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.003 ************************************ 00:04:00.003 END TEST skip_rpc 00:04:00.003 ************************************ 00:04:00.003 21:00:47 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:00.003 21:00:47 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.003 21:00:47 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.003 21:00:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.003 ************************************ 00:04:00.003 START TEST skip_rpc_with_json 00:04:00.003 ************************************ 00:04:00.003 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:00.003 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:00.003 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3701379 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3701379 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3701379 ']' 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.004 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:00.004 [2024-11-26 21:00:47.230989] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
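Note: the skip_rpc_with_json test that has just started exercises the save/restore path end to end: nvmf_get_transports is expected to fail with 'No such device' while no transport exists, a TCP transport is then created, and the full runtime configuration is dumped to config.json. The same sequence can be reproduced by hand with scripts/rpc.py against a running spdk_tgt; this sketch assumes the default RPC socket /var/tmp/spdk.sock and a shell in the SPDK source tree:

    # fails until a transport is created (mirrors the ERROR below)
    ./scripts/rpc.py nvmf_get_transports --trtype tcp

    # create the TCP transport, as the test does next
    ./scripts/rpc.py nvmf_create_transport -t tcp

    # dump the live configuration as JSON (the test writes this to config.json)
    ./scripts/rpc.py save_config > /tmp/config.json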
00:04:00.004 [2024-11-26 21:00:47.231027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3701379 ] 00:04:00.004 [2024-11-26 21:00:47.287797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.004 [2024-11-26 21:00:47.326589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.263 [2024-11-26 21:00:47.529428] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:00.263 request: 00:04:00.263 { 00:04:00.263 "trtype": "tcp", 00:04:00.263 "method": "nvmf_get_transports", 00:04:00.263 "req_id": 1 00:04:00.263 } 00:04:00.263 Got JSON-RPC error response 00:04:00.263 response: 00:04:00.263 { 00:04:00.263 "code": -19, 00:04:00.263 "message": "No such device" 00:04:00.263 } 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.263 [2024-11-26 21:00:47.537519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.263 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:00.263 { 00:04:00.263 "subsystems": [ 00:04:00.263 { 00:04:00.263 "subsystem": "fsdev", 00:04:00.263 "config": [ 00:04:00.263 { 00:04:00.263 "method": "fsdev_set_opts", 00:04:00.263 "params": { 00:04:00.263 "fsdev_io_pool_size": 65535, 00:04:00.263 "fsdev_io_cache_size": 256 00:04:00.263 } 00:04:00.263 } 00:04:00.263 ] 00:04:00.263 }, 00:04:00.263 { 00:04:00.263 "subsystem": "keyring", 00:04:00.263 "config": [] 00:04:00.263 }, 00:04:00.263 { 00:04:00.263 "subsystem": "iobuf", 00:04:00.263 "config": [ 00:04:00.263 { 00:04:00.263 "method": "iobuf_set_options", 00:04:00.263 "params": { 00:04:00.263 "small_pool_count": 8192, 00:04:00.263 "large_pool_count": 1024, 00:04:00.263 "small_bufsize": 8192, 00:04:00.263 "large_bufsize": 135168, 00:04:00.263 "enable_numa": false 00:04:00.263 } 00:04:00.263 } 00:04:00.263 ] 00:04:00.263 }, 00:04:00.263 { 00:04:00.263 "subsystem": "sock", 00:04:00.263 "config": [ 00:04:00.263 { 
00:04:00.263 "method": "sock_set_default_impl", 00:04:00.263 "params": { 00:04:00.263 "impl_name": "posix" 00:04:00.263 } 00:04:00.263 }, 00:04:00.263 { 00:04:00.263 "method": "sock_impl_set_options", 00:04:00.263 "params": { 00:04:00.263 "impl_name": "ssl", 00:04:00.263 "recv_buf_size": 4096, 00:04:00.263 "send_buf_size": 4096, 00:04:00.263 "enable_recv_pipe": true, 00:04:00.263 "enable_quickack": false, 00:04:00.263 "enable_placement_id": 0, 00:04:00.263 "enable_zerocopy_send_server": true, 00:04:00.263 "enable_zerocopy_send_client": false, 00:04:00.263 "zerocopy_threshold": 0, 00:04:00.263 "tls_version": 0, 00:04:00.264 "enable_ktls": false 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "sock_impl_set_options", 00:04:00.264 "params": { 00:04:00.264 "impl_name": "posix", 00:04:00.264 "recv_buf_size": 2097152, 00:04:00.264 "send_buf_size": 2097152, 00:04:00.264 "enable_recv_pipe": true, 00:04:00.264 "enable_quickack": false, 00:04:00.264 "enable_placement_id": 0, 00:04:00.264 "enable_zerocopy_send_server": true, 00:04:00.264 "enable_zerocopy_send_client": false, 00:04:00.264 "zerocopy_threshold": 0, 00:04:00.264 "tls_version": 0, 00:04:00.264 "enable_ktls": false 00:04:00.264 } 00:04:00.264 } 00:04:00.264 ] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "vmd", 00:04:00.264 "config": [] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "accel", 00:04:00.264 "config": [ 00:04:00.264 { 00:04:00.264 "method": "accel_set_options", 00:04:00.264 "params": { 00:04:00.264 "small_cache_size": 128, 00:04:00.264 "large_cache_size": 16, 00:04:00.264 "task_count": 2048, 00:04:00.264 "sequence_count": 2048, 00:04:00.264 "buf_count": 2048 00:04:00.264 } 00:04:00.264 } 00:04:00.264 ] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "bdev", 00:04:00.264 "config": [ 00:04:00.264 { 00:04:00.264 "method": "bdev_set_options", 00:04:00.264 "params": { 00:04:00.264 "bdev_io_pool_size": 65535, 00:04:00.264 "bdev_io_cache_size": 256, 00:04:00.264 "bdev_auto_examine": true, 00:04:00.264 "iobuf_small_cache_size": 128, 00:04:00.264 "iobuf_large_cache_size": 16 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "bdev_raid_set_options", 00:04:00.264 "params": { 00:04:00.264 "process_window_size_kb": 1024, 00:04:00.264 "process_max_bandwidth_mb_sec": 0 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "bdev_iscsi_set_options", 00:04:00.264 "params": { 00:04:00.264 "timeout_sec": 30 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "bdev_nvme_set_options", 00:04:00.264 "params": { 00:04:00.264 "action_on_timeout": "none", 00:04:00.264 "timeout_us": 0, 00:04:00.264 "timeout_admin_us": 0, 00:04:00.264 "keep_alive_timeout_ms": 10000, 00:04:00.264 "arbitration_burst": 0, 00:04:00.264 "low_priority_weight": 0, 00:04:00.264 "medium_priority_weight": 0, 00:04:00.264 "high_priority_weight": 0, 00:04:00.264 "nvme_adminq_poll_period_us": 10000, 00:04:00.264 "nvme_ioq_poll_period_us": 0, 00:04:00.264 "io_queue_requests": 0, 00:04:00.264 "delay_cmd_submit": true, 00:04:00.264 "transport_retry_count": 4, 00:04:00.264 "bdev_retry_count": 3, 00:04:00.264 "transport_ack_timeout": 0, 00:04:00.264 "ctrlr_loss_timeout_sec": 0, 00:04:00.264 "reconnect_delay_sec": 0, 00:04:00.264 "fast_io_fail_timeout_sec": 0, 00:04:00.264 "disable_auto_failback": false, 00:04:00.264 "generate_uuids": false, 00:04:00.264 "transport_tos": 0, 00:04:00.264 "nvme_error_stat": false, 00:04:00.264 "rdma_srq_size": 0, 00:04:00.264 "io_path_stat": false, 
00:04:00.264 "allow_accel_sequence": false, 00:04:00.264 "rdma_max_cq_size": 0, 00:04:00.264 "rdma_cm_event_timeout_ms": 0, 00:04:00.264 "dhchap_digests": [ 00:04:00.264 "sha256", 00:04:00.264 "sha384", 00:04:00.264 "sha512" 00:04:00.264 ], 00:04:00.264 "dhchap_dhgroups": [ 00:04:00.264 "null", 00:04:00.264 "ffdhe2048", 00:04:00.264 "ffdhe3072", 00:04:00.264 "ffdhe4096", 00:04:00.264 "ffdhe6144", 00:04:00.264 "ffdhe8192" 00:04:00.264 ] 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "bdev_nvme_set_hotplug", 00:04:00.264 "params": { 00:04:00.264 "period_us": 100000, 00:04:00.264 "enable": false 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "bdev_wait_for_examine" 00:04:00.264 } 00:04:00.264 ] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "scsi", 00:04:00.264 "config": null 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "scheduler", 00:04:00.264 "config": [ 00:04:00.264 { 00:04:00.264 "method": "framework_set_scheduler", 00:04:00.264 "params": { 00:04:00.264 "name": "static" 00:04:00.264 } 00:04:00.264 } 00:04:00.264 ] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "vhost_scsi", 00:04:00.264 "config": [] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "vhost_blk", 00:04:00.264 "config": [] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "ublk", 00:04:00.264 "config": [] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "nbd", 00:04:00.264 "config": [] 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "subsystem": "nvmf", 00:04:00.264 "config": [ 00:04:00.264 { 00:04:00.264 "method": "nvmf_set_config", 00:04:00.264 "params": { 00:04:00.264 "discovery_filter": "match_any", 00:04:00.264 "admin_cmd_passthru": { 00:04:00.264 "identify_ctrlr": false 00:04:00.264 }, 00:04:00.264 "dhchap_digests": [ 00:04:00.264 "sha256", 00:04:00.264 "sha384", 00:04:00.264 "sha512" 00:04:00.264 ], 00:04:00.264 "dhchap_dhgroups": [ 00:04:00.264 "null", 00:04:00.264 "ffdhe2048", 00:04:00.264 "ffdhe3072", 00:04:00.264 "ffdhe4096", 00:04:00.264 "ffdhe6144", 00:04:00.264 "ffdhe8192" 00:04:00.264 ] 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "nvmf_set_max_subsystems", 00:04:00.264 "params": { 00:04:00.264 "max_subsystems": 1024 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "nvmf_set_crdt", 00:04:00.264 "params": { 00:04:00.264 "crdt1": 0, 00:04:00.264 "crdt2": 0, 00:04:00.264 "crdt3": 0 00:04:00.264 } 00:04:00.264 }, 00:04:00.264 { 00:04:00.264 "method": "nvmf_create_transport", 00:04:00.264 "params": { 00:04:00.264 "trtype": "TCP", 00:04:00.264 "max_queue_depth": 128, 00:04:00.264 "max_io_qpairs_per_ctrlr": 127, 00:04:00.265 "in_capsule_data_size": 4096, 00:04:00.265 "max_io_size": 131072, 00:04:00.265 "io_unit_size": 131072, 00:04:00.265 "max_aq_depth": 128, 00:04:00.265 "num_shared_buffers": 511, 00:04:00.265 "buf_cache_size": 4294967295, 00:04:00.265 "dif_insert_or_strip": false, 00:04:00.265 "zcopy": false, 00:04:00.265 "c2h_success": true, 00:04:00.265 "sock_priority": 0, 00:04:00.265 "abort_timeout_sec": 1, 00:04:00.265 "ack_timeout": 0, 00:04:00.265 "data_wr_pool_size": 0 00:04:00.265 } 00:04:00.265 } 00:04:00.265 ] 00:04:00.265 }, 00:04:00.265 { 00:04:00.265 "subsystem": "iscsi", 00:04:00.265 "config": [ 00:04:00.265 { 00:04:00.265 "method": "iscsi_set_options", 00:04:00.265 "params": { 00:04:00.265 "node_base": "iqn.2016-06.io.spdk", 00:04:00.265 "max_sessions": 128, 00:04:00.265 "max_connections_per_session": 2, 00:04:00.265 "max_queue_depth": 64, 00:04:00.265 
"default_time2wait": 2, 00:04:00.265 "default_time2retain": 20, 00:04:00.265 "first_burst_length": 8192, 00:04:00.265 "immediate_data": true, 00:04:00.265 "allow_duplicated_isid": false, 00:04:00.265 "error_recovery_level": 0, 00:04:00.265 "nop_timeout": 60, 00:04:00.265 "nop_in_interval": 30, 00:04:00.265 "disable_chap": false, 00:04:00.265 "require_chap": false, 00:04:00.265 "mutual_chap": false, 00:04:00.265 "chap_group": 0, 00:04:00.265 "max_large_datain_per_connection": 64, 00:04:00.265 "max_r2t_per_connection": 4, 00:04:00.265 "pdu_pool_size": 36864, 00:04:00.265 "immediate_data_pool_size": 16384, 00:04:00.265 "data_out_pool_size": 2048 00:04:00.265 } 00:04:00.265 } 00:04:00.265 ] 00:04:00.265 } 00:04:00.265 ] 00:04:00.265 } 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3701379 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3701379 ']' 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3701379 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:00.265 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701379 00:04:00.524 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:00.524 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:00.524 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701379' 00:04:00.524 killing process with pid 3701379 00:04:00.524 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3701379 00:04:00.524 21:00:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3701379 00:04:00.783 21:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3701643 00:04:00.783 21:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:00.783 21:00:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3701643 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3701643 ']' 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3701643 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3701643 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3701643' 00:04:06.057 killing process with pid 3701643 00:04:06.057 21:00:53 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3701643 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3701643 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:06.057 00:04:06.057 real 0m6.195s 00:04:06.057 user 0m5.906s 00:04:06.057 sys 0m0.542s 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:06.057 ************************************ 00:04:06.057 END TEST skip_rpc_with_json 00:04:06.057 ************************************ 00:04:06.057 21:00:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:06.057 21:00:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.057 21:00:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.057 21:00:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.057 ************************************ 00:04:06.057 START TEST skip_rpc_with_delay 00:04:06.057 ************************************ 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:06.057 21:00:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:06.058 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:06.317 [2024-11-26 21:00:53.498241] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
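[Editor's note: the *ERROR* record above is the expected outcome of the skip_rpc_with_delay test, not a real failure. A minimal sketch of what the NOT wrapper exercised, assuming a built SPDK tree (workspace paths shortened for readability):]
    # --wait-for-rpc asks spdk_tgt to pause init until an RPC arrives, which is
    # impossible when --no-rpc-server disables the RPC server entirely; spdk_tgt
    # refuses to start and exits non-zero, which is what the test asserts
    # (the es=1 captured just below).
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo $?   # non-zero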
00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.317 00:04:06.317 real 0m0.067s 00:04:06.317 user 0m0.043s 00:04:06.317 sys 0m0.024s 00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.317 21:00:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:06.317 ************************************ 00:04:06.317 END TEST skip_rpc_with_delay 00:04:06.317 ************************************ 00:04:06.317 21:00:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:06.317 21:00:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:06.317 21:00:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:06.317 21:00:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.317 21:00:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.317 21:00:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.317 ************************************ 00:04:06.317 START TEST exit_on_failed_rpc_init 00:04:06.317 ************************************ 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3702748 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3702748 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3702748 ']' 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.317 21:00:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:06.317 [2024-11-26 21:00:53.609737] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
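[Editor's note on the core masks used throughout this run: -m takes a hexadecimal CPU mask, so the "Total cores available" and "Reactor started on core N" notices follow directly from which bits are set. A sketch, assuming the same spdk_tgt binary as above:]
    ./build/bin/spdk_tgt -m 0x1   # bit 0 set -> one core, reactor on core 0 (logged just below)
    ./build/bin/spdk_tgt -m 0x2   # bit 1 set -> one core, reactor on core 1 (the second instance below)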
00:04:06.317 [2024-11-26 21:00:53.609773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702748 ] 00:04:06.317 [2024-11-26 21:00:53.666080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.317 [2024-11-26 21:00:53.704923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:06.577 21:00:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:06.577 [2024-11-26 21:00:53.955488] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:04:06.577 [2024-11-26 21:00:53.955529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3702757 ] 00:04:06.836 [2024-11-26 21:00:54.011753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.836 [2024-11-26 21:00:54.049024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.836 [2024-11-26 21:00:54.049076] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
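[Editor's note: the error cascade around this point is the point of the exit_on_failed_rpc_init test: both spdk_tgt instances default to the same RPC socket, so the second one's rpc_initialize fails and the app stops non-zero. A hedged sketch of the conflict; the -r flag appears verbatim later in this log, but the alt.sock path and the coexistence variant are illustrative only:]
    ./build/bin/spdk_tgt -m 0x1 &                       # owns the default /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2                         # fails: socket path in use (asserted by the test)
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/alt.sock    # hypothetical: a second target on its own socket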
00:04:06.836 [2024-11-26 21:00:54.049085] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:06.836 [2024-11-26 21:00:54.049090] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3702748 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3702748 ']' 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3702748 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3702748 00:04:06.836 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.837 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.837 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3702748' 00:04:06.837 killing process with pid 3702748 00:04:06.837 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3702748 00:04:06.837 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3702748 00:04:07.096 00:04:07.096 real 0m0.875s 00:04:07.096 user 0m0.925s 00:04:07.096 sys 0m0.348s 00:04:07.096 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.096 21:00:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 ************************************ 00:04:07.096 END TEST exit_on_failed_rpc_init 00:04:07.096 ************************************ 00:04:07.096 21:00:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:07.096 00:04:07.096 real 0m12.883s 00:04:07.096 user 0m12.167s 00:04:07.096 sys 0m1.421s 00:04:07.096 21:00:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.096 21:00:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.096 ************************************ 00:04:07.096 END TEST skip_rpc 00:04:07.096 ************************************ 00:04:07.096 21:00:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:07.096 21:00:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.096 21:00:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.096 21:00:54 -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.356 ************************************ 00:04:07.356 START TEST rpc_client 00:04:07.356 ************************************ 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:07.356 * Looking for test storage... 00:04:07.356 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.356 21:00:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.356 --rc genhtml_branch_coverage=1 00:04:07.356 --rc genhtml_function_coverage=1 00:04:07.356 --rc genhtml_legend=1 00:04:07.356 --rc geninfo_all_blocks=1 00:04:07.356 --rc geninfo_unexecuted_blocks=1 00:04:07.356 00:04:07.356 ' 00:04:07.356 21:00:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.356 --rc genhtml_branch_coverage=1 00:04:07.357 --rc genhtml_function_coverage=1 00:04:07.357 --rc genhtml_legend=1 00:04:07.357 --rc geninfo_all_blocks=1 00:04:07.357 --rc geninfo_unexecuted_blocks=1 00:04:07.357 00:04:07.357 ' 00:04:07.357 21:00:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.357 --rc genhtml_branch_coverage=1 00:04:07.357 --rc genhtml_function_coverage=1 00:04:07.357 --rc genhtml_legend=1 00:04:07.357 --rc geninfo_all_blocks=1 00:04:07.357 --rc geninfo_unexecuted_blocks=1 00:04:07.357 00:04:07.357 ' 00:04:07.357 21:00:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.357 --rc genhtml_branch_coverage=1 00:04:07.357 --rc genhtml_function_coverage=1 00:04:07.357 --rc genhtml_legend=1 00:04:07.357 --rc geninfo_all_blocks=1 00:04:07.357 --rc geninfo_unexecuted_blocks=1 00:04:07.357 00:04:07.357 ' 00:04:07.357 21:00:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:07.357 OK 00:04:07.357 21:00:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:07.357 00:04:07.357 real 0m0.181s 00:04:07.357 user 0m0.119s 00:04:07.357 sys 0m0.073s 00:04:07.357 21:00:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.357 21:00:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:07.357 ************************************ 00:04:07.357 END TEST rpc_client 00:04:07.357 ************************************ 00:04:07.357 21:00:54 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:07.357 
21:00:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.357 21:00:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.357 21:00:54 -- common/autotest_common.sh@10 -- # set +x 00:04:07.357 ************************************ 00:04:07.357 START TEST json_config 00:04:07.357 ************************************ 00:04:07.357 21:00:54 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:07.616 21:00:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:07.616 21:00:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:07.616 21:00:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:07.616 21:00:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:07.616 21:00:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.616 21:00:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.616 21:00:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.616 21:00:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.616 21:00:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.616 21:00:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.616 21:00:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.616 21:00:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.616 21:00:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.616 21:00:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:07.616 21:00:54 json_config -- scripts/common.sh@345 -- # : 1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.616 21:00:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.616 21:00:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@353 -- # local d=1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.616 21:00:54 json_config -- scripts/common.sh@355 -- # echo 1 00:04:07.616 21:00:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.617 21:00:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:07.617 21:00:54 json_config -- scripts/common.sh@353 -- # local d=2 00:04:07.617 21:00:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.617 21:00:54 json_config -- scripts/common.sh@355 -- # echo 2 00:04:07.617 21:00:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.617 21:00:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.617 21:00:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.617 21:00:54 json_config -- scripts/common.sh@368 -- # return 0 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:07.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.617 --rc genhtml_branch_coverage=1 00:04:07.617 --rc genhtml_function_coverage=1 00:04:07.617 --rc genhtml_legend=1 00:04:07.617 --rc geninfo_all_blocks=1 00:04:07.617 --rc geninfo_unexecuted_blocks=1 00:04:07.617 00:04:07.617 ' 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:07.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.617 --rc genhtml_branch_coverage=1 00:04:07.617 --rc genhtml_function_coverage=1 00:04:07.617 --rc genhtml_legend=1 00:04:07.617 --rc geninfo_all_blocks=1 00:04:07.617 --rc geninfo_unexecuted_blocks=1 00:04:07.617 00:04:07.617 ' 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:07.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.617 --rc genhtml_branch_coverage=1 00:04:07.617 --rc genhtml_function_coverage=1 00:04:07.617 --rc genhtml_legend=1 00:04:07.617 --rc geninfo_all_blocks=1 00:04:07.617 --rc geninfo_unexecuted_blocks=1 00:04:07.617 00:04:07.617 ' 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:07.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.617 --rc genhtml_branch_coverage=1 00:04:07.617 --rc genhtml_function_coverage=1 00:04:07.617 --rc genhtml_legend=1 00:04:07.617 --rc geninfo_all_blocks=1 00:04:07.617 --rc geninfo_unexecuted_blocks=1 00:04:07.617 00:04:07.617 ' 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:07.617 21:00:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:07.617 21:00:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.617 21:00:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.617 21:00:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.617 21:00:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.617 21:00:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.617 21:00:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.617 21:00:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.617 21:00:54 json_config -- paths/export.sh@5 -- # export PATH 00:04:07.617 21:00:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@51 -- # : 0 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.617 
21:00:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.617 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.617 21:00:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:07.617 INFO: JSON configuration test init 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.617 21:00:54 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:07.617 21:00:54 json_config -- json_config/common.sh@9 -- # 
local app=target 00:04:07.617 21:00:54 json_config -- json_config/common.sh@10 -- # shift 00:04:07.617 21:00:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:07.617 21:00:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:07.617 21:00:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:07.617 21:00:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.617 21:00:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:07.617 21:00:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3703141 00:04:07.617 21:00:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:07.617 Waiting for target to run... 00:04:07.617 21:00:54 json_config -- json_config/common.sh@25 -- # waitforlisten 3703141 /var/tmp/spdk_tgt.sock 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 3703141 ']' 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:07.617 21:00:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.617 21:00:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:07.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:07.618 21:00:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.618 21:00:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:07.618 [2024-11-26 21:00:55.020026] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
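[Editor's note: the json_config test pairs the target's RPC socket with the client explicitly; -r names the UNIX socket spdk_tgt listens on, and rpc.py must point -s at the same path (both commands appear verbatim in this log). A condensed sketch, with the config filename illustrative only:]
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # push a saved JSON config into the waiting target, then init proceeds
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config < spdk_tgt_config.json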
00:04:07.618 [2024-11-26 21:00:55.020073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3703141 ] 00:04:08.186 [2024-11-26 21:00:55.447090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.186 [2024-11-26 21:00:55.504160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:08.445 21:00:55 json_config -- json_config/common.sh@26 -- # echo '' 00:04:08.445 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.445 21:00:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:08.445 21:00:55 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:08.445 21:00:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:11.735 21:00:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.735 21:00:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:11.735 21:00:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:11.735 21:00:58 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@54 -- # sort 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:04:11.735 21:00:59 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:11.735 21:00:59 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:04:11.735 21:00:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:17.005 
21:01:04 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@320 -- # e810=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@321 -- # x722=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@322 -- # mlx=() 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:04:17.005 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:04:17.005 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:04:17.005 21:01:04 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:04:17.005 Found net devices under 0000:18:00.0: mlx_0_0 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:17.005 21:01:04 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:04:17.005 Found net devices under 0000:18:00.1: mlx_0_1 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@62 -- # uname 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:17.006 21:01:04 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:17.264 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:17.264 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:04:17.264 altname enp24s0f0np0 00:04:17.264 altname ens785f0np0 00:04:17.264 inet 192.168.100.8/24 scope global mlx_0_0 00:04:17.264 valid_lft forever preferred_lft forever 00:04:17.264 21:01:04 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:17.265 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:17.265 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:04:17.265 altname enp24s0f1np1 00:04:17.265 altname ens785f1np1 
00:04:17.265 inet 192.168.100.9/24 scope global mlx_0_1 00:04:17.265 valid_lft forever preferred_lft forever 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@450 -- # return 0 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@109 -- # continue 2 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:04:17.265 192.168.100.9' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:04:17.265 192.168.100.9' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@485 -- # head -n 1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:17.265 21:01:04 json_config -- 
nvmf/common.sh@486 -- # echo '192.168.100.8 00:04:17.265 192.168.100.9' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@486 -- # head -n 1 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:04:17.265 21:01:04 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:04:17.265 21:01:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:04:17.265 21:01:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.265 21:01:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.523 MallocForNvmf0 00:04:17.523 21:01:04 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.523 21:01:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.523 MallocForNvmf1 00:04:17.782 21:01:04 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:04:17.782 21:01:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:04:17.782 [2024-11-26 21:01:05.121593] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:04:17.782 [2024-11-26 21:01:05.150851] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xeafd90/0xd847c0) succeed. 00:04:17.782 [2024-11-26 21:01:05.162140] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xeaedd0/0xe04480) succeed. 
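The address discovery traced above reduces to reading the first IPv4 address off each mlx interface. A minimal standalone sketch of the get_ip_address logic from nvmf/common.sh (the interface names are taken from this run):

#!/usr/bin/env bash
get_ip_address() {
    local interface=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is "ADDR/PREFIX".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run
echo "RDMA target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"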
00:04:17.782 21:01:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:17.782 21:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.041 21:01:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.041 21:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.299 21:01:05 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.299 21:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.299 21:01:05 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:18.299 21:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:18.557 [2024-11-26 21:01:05.853748] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:18.557 21:01:05 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:18.557 21:01:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.558 21:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.558 21:01:05 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:18.558 21:01:05 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.558 21:01:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.558 21:01:05 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:18.558 21:01:05 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.558 21:01:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:18.817 MallocBdevForConfigChangeCheck 00:04:18.817 21:01:06 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:18.817 21:01:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.817 21:01:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.817 21:01:06 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:18.817 21:01:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.075 21:01:06 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:19.075 INFO: shutting down applications... 
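Strung together, the tgt_rpc calls traced above are the whole NVMe-oF/RDMA target configuration for this test. Replayed as a plain script, with every command and argument copied from the trace (it assumes a spdk_tgt already listening on the same socket):

RPC='/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'

$RPC bdev_malloc_create 8 512  --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t rdma -u 8192 -c 0          # -c 0 is clamped to 256 per the warning above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420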
00:04:19.075 21:01:06 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:19.075 21:01:06 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:19.075 21:01:06 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:19.075 21:01:06 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:23.264 Calling clear_iscsi_subsystem 00:04:23.264 Calling clear_nvmf_subsystem 00:04:23.264 Calling clear_nbd_subsystem 00:04:23.264 Calling clear_ublk_subsystem 00:04:23.264 Calling clear_vhost_blk_subsystem 00:04:23.264 Calling clear_vhost_scsi_subsystem 00:04:23.264 Calling clear_bdev_subsystem 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@352 -- # break 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:23.264 21:01:10 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:23.264 21:01:10 json_config -- json_config/common.sh@31 -- # local app=target 00:04:23.264 21:01:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:23.264 21:01:10 json_config -- json_config/common.sh@35 -- # [[ -n 3703141 ]] 00:04:23.264 21:01:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3703141 00:04:23.264 21:01:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:23.264 21:01:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.264 21:01:10 json_config -- json_config/common.sh@41 -- # kill -0 3703141 00:04:23.264 21:01:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:23.833 21:01:11 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:23.833 21:01:11 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:23.833 21:01:11 json_config -- json_config/common.sh@41 -- # kill -0 3703141 00:04:23.833 21:01:11 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:23.833 21:01:11 json_config -- json_config/common.sh@43 -- # break 00:04:23.833 21:01:11 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:23.833 21:01:11 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:23.833 SPDK target shutdown done 00:04:23.833 21:01:11 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:23.833 INFO: relaunching applications... 
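The shutdown traced above is a SIGINT followed by a bounded liveness poll. A self-contained sketch of the pattern in json_config/common.sh, simplified to take a pid directly rather than the app name the real helper uses:

# Ask the app to exit, then poll for up to 30 * 0.5 s = 15 s.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 only probes whether the pid still exists.
        if ! kill -0 "$pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "pid $pid still alive after timeout" >&2
    return 1
}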
00:04:23.833 21:01:11 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.833 21:01:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:23.833 21:01:11 json_config -- json_config/common.sh@10 -- # shift 00:04:23.833 21:01:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:23.833 21:01:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:23.833 21:01:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:23.833 21:01:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.833 21:01:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:23.833 21:01:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3708326 00:04:23.833 21:01:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:23.833 Waiting for target to run... 00:04:23.833 21:01:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:23.833 21:01:11 json_config -- json_config/common.sh@25 -- # waitforlisten 3708326 /var/tmp/spdk_tgt.sock 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 3708326 ']' 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:23.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.833 21:01:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.833 [2024-11-26 21:01:11.248831] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:04:23.833 [2024-11-26 21:01:11.248894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3708326 ] 00:04:24.401 [2024-11-26 21:01:11.673927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.401 [2024-11-26 21:01:11.728347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.692 [2024-11-26 21:01:14.782387] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x158ce70/0x15992c0) succeed. 00:04:27.692 [2024-11-26 21:01:14.791475] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15900c0/0x1619300) succeed. 
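The relaunch above restarts spdk_tgt from the saved JSON and blocks until its RPC socket answers. A sketch of that startup gate; the rpc_get_methods probe and the retry budget are assumptions (the real waitforlisten in autotest_common.sh is more elaborate):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

# Same launch line as traced above.
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
    --json "$SPDK/spdk_tgt_config.json" &
app_pid=$!

echo 'Waiting for target to run...'
for ((i = 0; i < 100; i++)); do
    # Assumed readiness probe: ready once any RPC round-trips on the socket.
    if "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done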
00:04:27.692 [2024-11-26 21:01:14.839086] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:28.259 21:01:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.259 21:01:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:28.259 21:01:15 json_config -- json_config/common.sh@26 -- # echo '' 00:04:28.259 00:04:28.259 21:01:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:28.259 21:01:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:28.259 INFO: Checking if target configuration is the same... 00:04:28.259 21:01:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.259 21:01:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:28.259 21:01:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.259 + '[' 2 -ne 2 ']' 00:04:28.259 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:28.259 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:28.259 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:28.259 +++ basename /dev/fd/62 00:04:28.259 ++ mktemp /tmp/62.XXX 00:04:28.259 + tmp_file_1=/tmp/62.HTZ 00:04:28.259 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.259 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:28.259 + tmp_file_2=/tmp/spdk_tgt_config.json.a2n 00:04:28.259 + ret=0 00:04:28.259 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.518 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.518 + diff -u /tmp/62.HTZ /tmp/spdk_tgt_config.json.a2n 00:04:28.518 + echo 'INFO: JSON config files are the same' 00:04:28.518 INFO: JSON config files are the same 00:04:28.518 + rm /tmp/62.HTZ /tmp/spdk_tgt_config.json.a2n 00:04:28.518 + exit 0 00:04:28.518 21:01:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:28.518 21:01:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:28.518 INFO: changing configuration and checking if this can be detected... 
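The "JSON config files are the same" verdict above comes from sorting both dumps before diffing, so key order cannot cause a false mismatch. A condensed sketch of json_diff.sh's core; the stdin/stdout redirections are assumptions, since xtrace does not print them:

CONFIG_FILTER=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py

# $1 = live config (e.g. /dev/fd/62 fed by save_config), $2 = saved reference.
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
"$CONFIG_FILTER" -method sort < "$1" > "$tmp_file_1"    # assumed redirection
"$CONFIG_FILTER" -method sort < "$2" > "$tmp_file_2"    # assumed redirection

if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
    ret=0
else
    ret=1
fi
rm "$tmp_file_1" "$tmp_file_2"
exit "$ret"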
00:04:28.518 21:01:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:28.518 21:01:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:28.518 21:01:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:28.518 21:01:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.518 21:01:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.518 + '[' 2 -ne 2 ']' 00:04:28.518 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:28.518 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:04:28.518 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:04:28.518 +++ basename /dev/fd/62 00:04:28.518 ++ mktemp /tmp/62.XXX 00:04:28.518 + tmp_file_1=/tmp/62.r6w 00:04:28.518 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:28.518 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:28.518 + tmp_file_2=/tmp/spdk_tgt_config.json.lPV 00:04:28.518 + ret=0 00:04:28.518 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.094 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.094 + diff -u /tmp/62.r6w /tmp/spdk_tgt_config.json.lPV 00:04:29.094 + ret=1 00:04:29.094 + echo '=== Start of file: /tmp/62.r6w ===' 00:04:29.094 + cat /tmp/62.r6w 00:04:29.094 + echo '=== End of file: /tmp/62.r6w ===' 00:04:29.094 + echo '' 00:04:29.094 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lPV ===' 00:04:29.094 + cat /tmp/spdk_tgt_config.json.lPV 00:04:29.094 + echo '=== End of file: /tmp/spdk_tgt_config.json.lPV ===' 00:04:29.094 + echo '' 00:04:29.094 + rm /tmp/62.r6w /tmp/spdk_tgt_config.json.lPV 00:04:29.094 + exit 1 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:29.094 INFO: configuration change detected. 
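Change detection then runs the same comparison in reverse: drop the sentinel bdev, re-dump the config, and require a nonzero diff. A sketch under those assumptions (the scratch file below is hypothetical):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Deleting the sentinel must make the saved reference stale.
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
$RPC save_config > /tmp/live_config.json    # hypothetical scratch file
if ! "$SPDK/test/json_config/json_diff.sh" /tmp/live_config.json \
        "$SPDK/spdk_tgt_config.json"; then
    echo 'INFO: configuration change detected.'
fi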
00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 3708326 ]] 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.094 21:01:16 json_config -- json_config/json_config.sh@330 -- # killprocess 3708326 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 3708326 ']' 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@958 -- # kill -0 3708326 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@959 -- # uname 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3708326 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3708326' 00:04:29.094 killing process with pid 3708326 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@973 -- # kill 3708326 00:04:29.094 21:01:16 json_config -- common/autotest_common.sh@978 -- # wait 3708326 00:04:33.286 21:01:20 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:04:33.286 21:01:20 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:33.286 21:01:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.286 21:01:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.286 21:01:20 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:33.286 21:01:20 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:33.286 INFO: Success 00:04:33.286 21:01:20 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@121 -- # sync 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:33.286 21:01:20 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:04:33.286 00:04:33.286 real 0m25.521s 00:04:33.286 user 0m26.859s 00:04:33.286 sys 0m6.762s 00:04:33.286 21:01:20 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.286 21:01:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.286 ************************************ 00:04:33.286 END TEST json_config 00:04:33.286 ************************************ 00:04:33.286 21:01:20 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.286 21:01:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.286 21:01:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.286 21:01:20 -- common/autotest_common.sh@10 -- # set +x 00:04:33.286 ************************************ 00:04:33.286 START TEST json_config_extra_key 00:04:33.286 ************************************ 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:33.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.286 --rc genhtml_branch_coverage=1 00:04:33.286 --rc genhtml_function_coverage=1 00:04:33.286 --rc genhtml_legend=1 00:04:33.286 --rc geninfo_all_blocks=1 00:04:33.286 --rc geninfo_unexecuted_blocks=1 00:04:33.286 00:04:33.286 ' 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:33.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.286 --rc genhtml_branch_coverage=1 00:04:33.286 --rc genhtml_function_coverage=1 00:04:33.286 --rc genhtml_legend=1 00:04:33.286 --rc geninfo_all_blocks=1 00:04:33.286 --rc geninfo_unexecuted_blocks=1 00:04:33.286 00:04:33.286 ' 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:33.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.286 --rc genhtml_branch_coverage=1 00:04:33.286 --rc genhtml_function_coverage=1 00:04:33.286 --rc genhtml_legend=1 00:04:33.286 --rc geninfo_all_blocks=1 00:04:33.286 --rc geninfo_unexecuted_blocks=1 00:04:33.286 00:04:33.286 ' 00:04:33.286 21:01:20 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:33.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.286 --rc genhtml_branch_coverage=1 00:04:33.286 --rc genhtml_function_coverage=1 00:04:33.286 --rc genhtml_legend=1 00:04:33.286 --rc geninfo_all_blocks=1 00:04:33.286 --rc geninfo_unexecuted_blocks=1 00:04:33.286 00:04:33.286 ' 00:04:33.286 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.286 
21:01:20 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.286 21:01:20 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.286 21:01:20 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.286 21:01:20 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.286 21:01:20 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.286 21:01:20 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:33.286 21:01:20 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.286 21:01:20 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.287 21:01:20 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.287 21:01:20 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:33.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:33.287 21:01:20 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:33.287 21:01:20 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:33.287 21:01:20 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:33.287 INFO: launching applications... 
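The per-app bookkeeping traced above is four bash associative arrays keyed by app name ('target' here). Reproduced as a self-contained sketch with the values copied from this trace:

#!/usr/bin/env bash
# App state as initialized by test/json_config/common.sh in the trace above.
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

app=target
echo "launching $app with ${app_params[$app]} on socket ${app_socket[$app]}"
echo "config: ${configs_path[$app]}"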
00:04:33.287 21:01:20 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3710237 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.287 Waiting for target to run... 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3710237 /var/tmp/spdk_tgt.sock 00:04:33.287 21:01:20 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3710237 ']' 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.287 21:01:20 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.287 [2024-11-26 21:01:20.602812] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:04:33.287 [2024-11-26 21:01:20.602861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710237 ] 00:04:33.855 [2024-11-26 21:01:21.026915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.855 [2024-11-26 21:01:21.083356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.114 21:01:21 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.114 21:01:21 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:34.114 00:04:34.114 21:01:21 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:34.114 INFO: shutting down applications... 
00:04:34.114 21:01:21 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3710237 ]] 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3710237 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3710237 00:04:34.114 21:01:21 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3710237 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:34.682 21:01:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:34.682 SPDK target shutdown done 00:04:34.682 21:01:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:34.682 Success 00:04:34.682 00:04:34.682 real 0m1.537s 00:04:34.682 user 0m1.114s 00:04:34.682 sys 0m0.561s 00:04:34.682 21:01:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.682 21:01:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.682 ************************************ 00:04:34.682 END TEST json_config_extra_key 00:04:34.682 ************************************ 00:04:34.682 21:01:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.682 21:01:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.682 21:01:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.682 21:01:21 -- common/autotest_common.sh@10 -- # set +x 00:04:34.682 ************************************ 00:04:34.682 START TEST alias_rpc 00:04:34.682 ************************************ 00:04:34.682 21:01:21 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:34.683 * Looking for test storage... 
00:04:34.683 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:04:34.683 21:01:22 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:34.683 21:01:22 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:34.683 21:01:22 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:34.942 21:01:22 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.942 21:01:22 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.942 21:01:22 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.942 21:01:22 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.942 --rc genhtml_branch_coverage=1 00:04:34.942 --rc genhtml_function_coverage=1 00:04:34.942 --rc genhtml_legend=1 00:04:34.942 --rc geninfo_all_blocks=1 00:04:34.942 --rc geninfo_unexecuted_blocks=1 00:04:34.942 00:04:34.942 ' 00:04:34.942 21:01:22 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.942 --rc genhtml_branch_coverage=1 00:04:34.942 --rc genhtml_function_coverage=1 00:04:34.942 --rc genhtml_legend=1 00:04:34.942 --rc geninfo_all_blocks=1 00:04:34.942 --rc geninfo_unexecuted_blocks=1 00:04:34.942 00:04:34.942 ' 00:04:34.942 21:01:22 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.942 --rc genhtml_branch_coverage=1 00:04:34.942 --rc genhtml_function_coverage=1 00:04:34.942 --rc genhtml_legend=1 00:04:34.942 --rc geninfo_all_blocks=1 00:04:34.942 --rc geninfo_unexecuted_blocks=1 00:04:34.942 00:04:34.942 ' 00:04:34.942 21:01:22 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:34.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.942 --rc genhtml_branch_coverage=1 00:04:34.942 --rc genhtml_function_coverage=1 00:04:34.942 --rc genhtml_legend=1 00:04:34.942 --rc geninfo_all_blocks=1 00:04:34.942 --rc geninfo_unexecuted_blocks=1 00:04:34.942 00:04:34.942 ' 00:04:34.943 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.943 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3710605 00:04:34.943 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3710605 00:04:34.943 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3710605 ']' 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.943 21:01:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.943 [2024-11-26 21:01:22.190260] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:04:34.943 [2024-11-26 21:01:22.190311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710605 ] 00:04:34.943 [2024-11-26 21:01:22.249034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.943 [2024-11-26 21:01:22.286104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.202 21:01:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.202 21:01:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.202 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:35.461 21:01:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3710605 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3710605 ']' 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3710605 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710605 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710605' 00:04:35.461 killing process with pid 3710605 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 3710605 00:04:35.461 21:01:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 3710605 00:04:35.721 00:04:35.721 real 0m1.056s 00:04:35.721 user 0m1.066s 00:04:35.721 sys 0m0.382s 00:04:35.721 21:01:23 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.721 21:01:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.721 ************************************ 00:04:35.721 END TEST alias_rpc 00:04:35.721 ************************************ 00:04:35.721 21:01:23 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.721 21:01:23 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.721 21:01:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.721 21:01:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.721 21:01:23 -- common/autotest_common.sh@10 -- # set +x 00:04:35.721 ************************************ 00:04:35.721 START TEST spdkcli_tcp 00:04:35.721 ************************************ 00:04:35.721 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.980 * Looking for test storage... 
00:04:35.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:04:35.980 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.980 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:35.980 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.980 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.980 21:01:23 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.981 21:01:23 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.981 21:01:23 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.981 21:01:23 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:35.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.981 --rc genhtml_branch_coverage=1 00:04:35.981 --rc genhtml_function_coverage=1 00:04:35.981 --rc genhtml_legend=1 00:04:35.981 --rc geninfo_all_blocks=1 00:04:35.981 --rc geninfo_unexecuted_blocks=1 00:04:35.981 00:04:35.981 ' 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:35.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.981 --rc genhtml_branch_coverage=1 00:04:35.981 --rc genhtml_function_coverage=1 00:04:35.981 --rc genhtml_legend=1 00:04:35.981 --rc geninfo_all_blocks=1 00:04:35.981 --rc geninfo_unexecuted_blocks=1 
00:04:35.981 00:04:35.981 ' 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:35.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.981 --rc genhtml_branch_coverage=1 00:04:35.981 --rc genhtml_function_coverage=1 00:04:35.981 --rc genhtml_legend=1 00:04:35.981 --rc geninfo_all_blocks=1 00:04:35.981 --rc geninfo_unexecuted_blocks=1 00:04:35.981 00:04:35.981 ' 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:35.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.981 --rc genhtml_branch_coverage=1 00:04:35.981 --rc genhtml_function_coverage=1 00:04:35.981 --rc genhtml_legend=1 00:04:35.981 --rc geninfo_all_blocks=1 00:04:35.981 --rc geninfo_unexecuted_blocks=1 00:04:35.981 00:04:35.981 ' 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3710795 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3710795 00:04:35.981 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3710795 ']' 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.981 21:01:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.981 [2024-11-26 21:01:23.314987] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:04:35.981 [2024-11-26 21:01:23.315034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3710795 ] 00:04:35.981 [2024-11-26 21:01:23.372970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.981 [2024-11-26 21:01:23.413338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.981 [2024-11-26 21:01:23.413340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.239 21:01:23 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.239 21:01:23 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:36.239 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3710938 00:04:36.239 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:36.239 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:36.499 [ 00:04:36.499 "bdev_malloc_delete", 00:04:36.499 "bdev_malloc_create", 00:04:36.499 "bdev_null_resize", 00:04:36.499 "bdev_null_delete", 00:04:36.500 "bdev_null_create", 00:04:36.500 "bdev_nvme_cuse_unregister", 00:04:36.500 "bdev_nvme_cuse_register", 00:04:36.500 "bdev_opal_new_user", 00:04:36.500 "bdev_opal_set_lock_state", 00:04:36.500 "bdev_opal_delete", 00:04:36.500 "bdev_opal_get_info", 00:04:36.500 "bdev_opal_create", 00:04:36.500 "bdev_nvme_opal_revert", 00:04:36.500 "bdev_nvme_opal_init", 00:04:36.500 "bdev_nvme_send_cmd", 00:04:36.500 "bdev_nvme_set_keys", 00:04:36.500 "bdev_nvme_get_path_iostat", 00:04:36.500 "bdev_nvme_get_mdns_discovery_info", 00:04:36.500 "bdev_nvme_stop_mdns_discovery", 00:04:36.500 "bdev_nvme_start_mdns_discovery", 00:04:36.500 "bdev_nvme_set_multipath_policy", 00:04:36.500 "bdev_nvme_set_preferred_path", 00:04:36.500 "bdev_nvme_get_io_paths", 00:04:36.500 "bdev_nvme_remove_error_injection", 00:04:36.500 "bdev_nvme_add_error_injection", 00:04:36.500 "bdev_nvme_get_discovery_info", 00:04:36.500 "bdev_nvme_stop_discovery", 00:04:36.500 "bdev_nvme_start_discovery", 00:04:36.500 "bdev_nvme_get_controller_health_info", 00:04:36.500 "bdev_nvme_disable_controller", 00:04:36.500 "bdev_nvme_enable_controller", 00:04:36.500 "bdev_nvme_reset_controller", 00:04:36.500 "bdev_nvme_get_transport_statistics", 00:04:36.500 "bdev_nvme_apply_firmware", 00:04:36.500 "bdev_nvme_detach_controller", 00:04:36.500 "bdev_nvme_get_controllers", 00:04:36.500 "bdev_nvme_attach_controller", 00:04:36.500 "bdev_nvme_set_hotplug", 00:04:36.500 "bdev_nvme_set_options", 00:04:36.500 "bdev_passthru_delete", 00:04:36.500 "bdev_passthru_create", 00:04:36.500 "bdev_lvol_set_parent_bdev", 00:04:36.500 "bdev_lvol_set_parent", 00:04:36.500 "bdev_lvol_check_shallow_copy", 00:04:36.500 "bdev_lvol_start_shallow_copy", 00:04:36.500 "bdev_lvol_grow_lvstore", 00:04:36.500 "bdev_lvol_get_lvols", 00:04:36.500 "bdev_lvol_get_lvstores", 00:04:36.500 "bdev_lvol_delete", 00:04:36.500 "bdev_lvol_set_read_only", 00:04:36.500 "bdev_lvol_resize", 00:04:36.500 "bdev_lvol_decouple_parent", 00:04:36.500 "bdev_lvol_inflate", 00:04:36.500 "bdev_lvol_rename", 00:04:36.500 "bdev_lvol_clone_bdev", 00:04:36.500 "bdev_lvol_clone", 00:04:36.500 "bdev_lvol_snapshot", 00:04:36.500 "bdev_lvol_create", 00:04:36.500 "bdev_lvol_delete_lvstore", 00:04:36.500 "bdev_lvol_rename_lvstore", 
00:04:36.500 "bdev_lvol_create_lvstore", 00:04:36.500 "bdev_raid_set_options", 00:04:36.500 "bdev_raid_remove_base_bdev", 00:04:36.500 "bdev_raid_add_base_bdev", 00:04:36.500 "bdev_raid_delete", 00:04:36.500 "bdev_raid_create", 00:04:36.500 "bdev_raid_get_bdevs", 00:04:36.500 "bdev_error_inject_error", 00:04:36.500 "bdev_error_delete", 00:04:36.500 "bdev_error_create", 00:04:36.500 "bdev_split_delete", 00:04:36.500 "bdev_split_create", 00:04:36.500 "bdev_delay_delete", 00:04:36.500 "bdev_delay_create", 00:04:36.500 "bdev_delay_update_latency", 00:04:36.500 "bdev_zone_block_delete", 00:04:36.500 "bdev_zone_block_create", 00:04:36.500 "blobfs_create", 00:04:36.500 "blobfs_detect", 00:04:36.500 "blobfs_set_cache_size", 00:04:36.500 "bdev_aio_delete", 00:04:36.500 "bdev_aio_rescan", 00:04:36.500 "bdev_aio_create", 00:04:36.500 "bdev_ftl_set_property", 00:04:36.500 "bdev_ftl_get_properties", 00:04:36.500 "bdev_ftl_get_stats", 00:04:36.500 "bdev_ftl_unmap", 00:04:36.500 "bdev_ftl_unload", 00:04:36.500 "bdev_ftl_delete", 00:04:36.500 "bdev_ftl_load", 00:04:36.500 "bdev_ftl_create", 00:04:36.500 "bdev_virtio_attach_controller", 00:04:36.500 "bdev_virtio_scsi_get_devices", 00:04:36.500 "bdev_virtio_detach_controller", 00:04:36.500 "bdev_virtio_blk_set_hotplug", 00:04:36.500 "bdev_iscsi_delete", 00:04:36.500 "bdev_iscsi_create", 00:04:36.500 "bdev_iscsi_set_options", 00:04:36.500 "accel_error_inject_error", 00:04:36.500 "ioat_scan_accel_module", 00:04:36.500 "dsa_scan_accel_module", 00:04:36.500 "iaa_scan_accel_module", 00:04:36.500 "keyring_file_remove_key", 00:04:36.500 "keyring_file_add_key", 00:04:36.500 "keyring_linux_set_options", 00:04:36.500 "fsdev_aio_delete", 00:04:36.500 "fsdev_aio_create", 00:04:36.500 "iscsi_get_histogram", 00:04:36.500 "iscsi_enable_histogram", 00:04:36.500 "iscsi_set_options", 00:04:36.500 "iscsi_get_auth_groups", 00:04:36.500 "iscsi_auth_group_remove_secret", 00:04:36.500 "iscsi_auth_group_add_secret", 00:04:36.500 "iscsi_delete_auth_group", 00:04:36.500 "iscsi_create_auth_group", 00:04:36.500 "iscsi_set_discovery_auth", 00:04:36.500 "iscsi_get_options", 00:04:36.500 "iscsi_target_node_request_logout", 00:04:36.500 "iscsi_target_node_set_redirect", 00:04:36.500 "iscsi_target_node_set_auth", 00:04:36.500 "iscsi_target_node_add_lun", 00:04:36.500 "iscsi_get_stats", 00:04:36.500 "iscsi_get_connections", 00:04:36.500 "iscsi_portal_group_set_auth", 00:04:36.500 "iscsi_start_portal_group", 00:04:36.500 "iscsi_delete_portal_group", 00:04:36.500 "iscsi_create_portal_group", 00:04:36.500 "iscsi_get_portal_groups", 00:04:36.500 "iscsi_delete_target_node", 00:04:36.500 "iscsi_target_node_remove_pg_ig_maps", 00:04:36.500 "iscsi_target_node_add_pg_ig_maps", 00:04:36.500 "iscsi_create_target_node", 00:04:36.500 "iscsi_get_target_nodes", 00:04:36.500 "iscsi_delete_initiator_group", 00:04:36.500 "iscsi_initiator_group_remove_initiators", 00:04:36.500 "iscsi_initiator_group_add_initiators", 00:04:36.500 "iscsi_create_initiator_group", 00:04:36.500 "iscsi_get_initiator_groups", 00:04:36.500 "nvmf_set_crdt", 00:04:36.500 "nvmf_set_config", 00:04:36.500 "nvmf_set_max_subsystems", 00:04:36.500 "nvmf_stop_mdns_prr", 00:04:36.500 "nvmf_publish_mdns_prr", 00:04:36.500 "nvmf_subsystem_get_listeners", 00:04:36.500 "nvmf_subsystem_get_qpairs", 00:04:36.500 "nvmf_subsystem_get_controllers", 00:04:36.500 "nvmf_get_stats", 00:04:36.500 "nvmf_get_transports", 00:04:36.500 "nvmf_create_transport", 00:04:36.500 "nvmf_get_targets", 00:04:36.500 "nvmf_delete_target", 00:04:36.500 "nvmf_create_target", 
00:04:36.500 "nvmf_subsystem_allow_any_host", 00:04:36.500 "nvmf_subsystem_set_keys", 00:04:36.500 "nvmf_subsystem_remove_host", 00:04:36.500 "nvmf_subsystem_add_host", 00:04:36.500 "nvmf_ns_remove_host", 00:04:36.500 "nvmf_ns_add_host", 00:04:36.500 "nvmf_subsystem_remove_ns", 00:04:36.500 "nvmf_subsystem_set_ns_ana_group", 00:04:36.500 "nvmf_subsystem_add_ns", 00:04:36.500 "nvmf_subsystem_listener_set_ana_state", 00:04:36.500 "nvmf_discovery_get_referrals", 00:04:36.500 "nvmf_discovery_remove_referral", 00:04:36.500 "nvmf_discovery_add_referral", 00:04:36.500 "nvmf_subsystem_remove_listener", 00:04:36.500 "nvmf_subsystem_add_listener", 00:04:36.500 "nvmf_delete_subsystem", 00:04:36.500 "nvmf_create_subsystem", 00:04:36.500 "nvmf_get_subsystems", 00:04:36.500 "env_dpdk_get_mem_stats", 00:04:36.500 "nbd_get_disks", 00:04:36.500 "nbd_stop_disk", 00:04:36.500 "nbd_start_disk", 00:04:36.500 "ublk_recover_disk", 00:04:36.500 "ublk_get_disks", 00:04:36.500 "ublk_stop_disk", 00:04:36.501 "ublk_start_disk", 00:04:36.501 "ublk_destroy_target", 00:04:36.501 "ublk_create_target", 00:04:36.501 "virtio_blk_create_transport", 00:04:36.501 "virtio_blk_get_transports", 00:04:36.501 "vhost_controller_set_coalescing", 00:04:36.501 "vhost_get_controllers", 00:04:36.501 "vhost_delete_controller", 00:04:36.501 "vhost_create_blk_controller", 00:04:36.501 "vhost_scsi_controller_remove_target", 00:04:36.501 "vhost_scsi_controller_add_target", 00:04:36.501 "vhost_start_scsi_controller", 00:04:36.501 "vhost_create_scsi_controller", 00:04:36.501 "thread_set_cpumask", 00:04:36.501 "scheduler_set_options", 00:04:36.501 "framework_get_governor", 00:04:36.501 "framework_get_scheduler", 00:04:36.501 "framework_set_scheduler", 00:04:36.501 "framework_get_reactors", 00:04:36.501 "thread_get_io_channels", 00:04:36.501 "thread_get_pollers", 00:04:36.501 "thread_get_stats", 00:04:36.501 "framework_monitor_context_switch", 00:04:36.501 "spdk_kill_instance", 00:04:36.501 "log_enable_timestamps", 00:04:36.501 "log_get_flags", 00:04:36.501 "log_clear_flag", 00:04:36.501 "log_set_flag", 00:04:36.501 "log_get_level", 00:04:36.501 "log_set_level", 00:04:36.501 "log_get_print_level", 00:04:36.501 "log_set_print_level", 00:04:36.501 "framework_enable_cpumask_locks", 00:04:36.501 "framework_disable_cpumask_locks", 00:04:36.501 "framework_wait_init", 00:04:36.501 "framework_start_init", 00:04:36.501 "scsi_get_devices", 00:04:36.501 "bdev_get_histogram", 00:04:36.501 "bdev_enable_histogram", 00:04:36.501 "bdev_set_qos_limit", 00:04:36.501 "bdev_set_qd_sampling_period", 00:04:36.501 "bdev_get_bdevs", 00:04:36.501 "bdev_reset_iostat", 00:04:36.501 "bdev_get_iostat", 00:04:36.501 "bdev_examine", 00:04:36.501 "bdev_wait_for_examine", 00:04:36.501 "bdev_set_options", 00:04:36.501 "accel_get_stats", 00:04:36.501 "accel_set_options", 00:04:36.501 "accel_set_driver", 00:04:36.501 "accel_crypto_key_destroy", 00:04:36.501 "accel_crypto_keys_get", 00:04:36.501 "accel_crypto_key_create", 00:04:36.501 "accel_assign_opc", 00:04:36.501 "accel_get_module_info", 00:04:36.501 "accel_get_opc_assignments", 00:04:36.501 "vmd_rescan", 00:04:36.501 "vmd_remove_device", 00:04:36.501 "vmd_enable", 00:04:36.501 "sock_get_default_impl", 00:04:36.501 "sock_set_default_impl", 00:04:36.501 "sock_impl_set_options", 00:04:36.501 "sock_impl_get_options", 00:04:36.501 "iobuf_get_stats", 00:04:36.501 "iobuf_set_options", 00:04:36.501 "keyring_get_keys", 00:04:36.501 "framework_get_pci_devices", 00:04:36.501 "framework_get_config", 00:04:36.501 "framework_get_subsystems", 
00:04:36.501 "fsdev_set_opts", 00:04:36.501 "fsdev_get_opts", 00:04:36.501 "trace_get_info", 00:04:36.501 "trace_get_tpoint_group_mask", 00:04:36.501 "trace_disable_tpoint_group", 00:04:36.501 "trace_enable_tpoint_group", 00:04:36.501 "trace_clear_tpoint_mask", 00:04:36.501 "trace_set_tpoint_mask", 00:04:36.501 "notify_get_notifications", 00:04:36.501 "notify_get_types", 00:04:36.501 "spdk_get_version", 00:04:36.501 "rpc_get_methods" 00:04:36.501 ] 00:04:36.501 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.501 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:36.501 21:01:23 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3710795 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3710795 ']' 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3710795 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3710795 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3710795' 00:04:36.501 killing process with pid 3710795 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3710795 00:04:36.501 21:01:23 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3710795 00:04:36.760 00:04:36.760 real 0m1.077s 00:04:36.760 user 0m1.809s 00:04:36.760 sys 0m0.418s 00:04:36.760 21:01:24 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.760 21:01:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.760 ************************************ 00:04:36.760 END TEST spdkcli_tcp 00:04:36.760 ************************************ 00:04:37.020 21:01:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.020 21:01:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.020 21:01:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.020 21:01:24 -- common/autotest_common.sh@10 -- # set +x 00:04:37.020 ************************************ 00:04:37.020 START TEST dpdk_mem_utility 00:04:37.020 ************************************ 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:37.020 * Looking for test storage... 
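The spdkcli_tcp run that just ended exercises the JSON-RPC server over TCP instead of the default UNIX socket: spdk_tgt listens on /var/tmp/spdk.sock, socat bridges TCP port 9998 to that socket, and rpc.py talks to 127.0.0.1:9998 with connection retries and a timeout, returning the rpc_get_methods listing shown above. The bridge can be reproduced by hand with the same commands the log shows (the backgrounding and cleanup are added here as assumptions):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &               # TCP front end for the RPC socket
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods     # -r retries, -t timeout (s), -s/-p TCP endpoint
  kill "$socat_pid"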
00:04:37.020 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.020 21:01:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.020 --rc genhtml_branch_coverage=1 00:04:37.020 --rc genhtml_function_coverage=1 00:04:37.020 --rc genhtml_legend=1 00:04:37.020 --rc geninfo_all_blocks=1 00:04:37.020 --rc geninfo_unexecuted_blocks=1 00:04:37.020 00:04:37.020 ' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.020 --rc genhtml_branch_coverage=1 00:04:37.020 --rc genhtml_function_coverage=1 00:04:37.020 --rc genhtml_legend=1 00:04:37.020 --rc geninfo_all_blocks=1 00:04:37.020 --rc geninfo_unexecuted_blocks=1 00:04:37.020 00:04:37.020 ' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.020 --rc genhtml_branch_coverage=1 00:04:37.020 --rc genhtml_function_coverage=1 00:04:37.020 --rc genhtml_legend=1 00:04:37.020 --rc geninfo_all_blocks=1 00:04:37.020 --rc geninfo_unexecuted_blocks=1 00:04:37.020 00:04:37.020 ' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.020 --rc genhtml_branch_coverage=1 00:04:37.020 --rc genhtml_function_coverage=1 00:04:37.020 --rc genhtml_legend=1 00:04:37.020 --rc geninfo_all_blocks=1 00:04:37.020 --rc geninfo_unexecuted_blocks=1 00:04:37.020 00:04:37.020 ' 00:04:37.020 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.020 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3711017 00:04:37.020 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3711017 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3711017 ']' 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.020 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.021 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.021 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.021 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.021 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:37.021 [2024-11-26 21:01:24.443272] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
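The dpdk_mem_utility run now underway follows a simple flow: launch spdk_tgt, wait for its RPC socket, call env_dpdk_get_mem_stats (which, per the output below, dumps to /tmp/spdk_mem_dump.txt), then render the dump with scripts/dpdk_mem_info.py, once as a heap/mempool/memzone summary and once with -m 0 for the element-level view of heap 0 (flag usage as shown in the log). A hedged sketch of the same steps outside the harness, with a sleep loop standing in for the waitforlisten helper (that loop is an assumption, not the real helper):

  ./build/bin/spdk_tgt &                                  # target whose DPDK heaps get inspected
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done     # crude waitforlisten stand-in (assumption)
  ./scripts/rpc.py env_dpdk_get_mem_stats                 # writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                              # summary: heaps, mempools, memzones
  ./scripts/dpdk_mem_info.py -m 0                         # detailed element list for heap 0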
00:04:37.021 [2024-11-26 21:01:24.443316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711017 ] 00:04:37.278 [2024-11-26 21:01:24.499805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.278 [2024-11-26 21:01:24.538931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.537 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.537 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:37.537 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.537 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.537 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.537 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.537 { 00:04:37.537 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.537 } 00:04:37.537 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.538 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.538 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:37.538 1 heaps totaling size 810.000000 MiB 00:04:37.538 size: 810.000000 MiB heap id: 0 00:04:37.538 end heaps---------- 00:04:37.538 9 mempools totaling size 595.772034 MiB 00:04:37.538 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:37.538 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:37.538 size: 92.545471 MiB name: bdev_io_3711017 00:04:37.538 size: 50.003479 MiB name: msgpool_3711017 00:04:37.538 size: 36.509338 MiB name: fsdev_io_3711017 00:04:37.538 size: 21.763794 MiB name: PDU_Pool 00:04:37.538 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:37.538 size: 4.133484 MiB name: evtpool_3711017 00:04:37.538 size: 0.026123 MiB name: Session_Pool 00:04:37.538 end mempools------- 00:04:37.538 6 memzones totaling size 4.142822 MiB 00:04:37.538 size: 1.000366 MiB name: RG_ring_0_3711017 00:04:37.538 size: 1.000366 MiB name: RG_ring_1_3711017 00:04:37.538 size: 1.000366 MiB name: RG_ring_4_3711017 00:04:37.538 size: 1.000366 MiB name: RG_ring_5_3711017 00:04:37.538 size: 0.125366 MiB name: RG_ring_2_3711017 00:04:37.538 size: 0.015991 MiB name: RG_ring_3_3711017 00:04:37.538 end memzones------- 00:04:37.538 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:37.538 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:37.538 list of free elements. 
size: 10.862488 MiB 00:04:37.538 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:37.538 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:37.538 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:37.538 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:37.538 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:37.538 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:37.538 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:37.538 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:37.538 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:37.538 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:37.538 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:37.538 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:37.538 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:37.538 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:37.538 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:37.538 list of standard malloc elements. size: 199.218628 MiB 00:04:37.538 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:37.538 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:37.538 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:37.538 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:37.538 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:37.538 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:37.538 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:37.538 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:37.538 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:37.538 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:37.538 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:37.538 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:37.538 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:37.538 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:37.538 list of memzone associated elements. size: 599.918884 MiB 00:04:37.538 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:37.538 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:37.538 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:37.538 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:37.538 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:37.538 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3711017_0 00:04:37.538 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:37.538 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3711017_0 00:04:37.538 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:37.538 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3711017_0 00:04:37.538 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:37.538 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:37.538 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:37.538 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:37.538 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:37.538 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3711017_0 00:04:37.538 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:37.538 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3711017 00:04:37.538 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:37.538 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3711017 00:04:37.538 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:37.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:37.538 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:37.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:37.538 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:37.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:37.538 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:37.538 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:37.538 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:37.538 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3711017 00:04:37.538 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:37.538 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3711017 00:04:37.538 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:37.538 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3711017 00:04:37.538 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:37.538 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3711017 00:04:37.538 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:37.538 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3711017 00:04:37.538 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:37.538 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3711017 00:04:37.538 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:37.538 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:37.538 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:37.538 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:37.538 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:37.538 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:37.538 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:37.538 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3711017 00:04:37.538 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:37.538 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3711017 00:04:37.538 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:37.538 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:37.538 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:37.538 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:37.538 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:37.538 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3711017 00:04:37.538 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:37.538 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:37.538 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:37.538 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3711017 00:04:37.538 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:37.538 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3711017 00:04:37.538 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:37.538 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3711017 00:04:37.539 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:37.539 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:37.539 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:37.539 21:01:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3711017 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3711017 ']' 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3711017 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3711017 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3711017' 00:04:37.539 killing process with pid 3711017 00:04:37.539 21:01:24 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3711017 00:04:37.539 21:01:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3711017 00:04:37.798 00:04:37.798 real 0m0.948s 00:04:37.798 user 0m0.864s 00:04:37.798 sys 0m0.390s 00:04:37.798 21:01:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.798 21:01:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.798 ************************************ 00:04:37.798 END TEST dpdk_mem_utility 00:04:37.798 ************************************ 00:04:37.798 21:01:25 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:37.798 21:01:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.798 21:01:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.798 21:01:25 -- common/autotest_common.sh@10 -- # set +x 00:04:38.057 ************************************ 00:04:38.057 START TEST event 00:04:38.057 ************************************ 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:04:38.057 * Looking for test storage... 00:04:38.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:38.057 21:01:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.057 21:01:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.057 21:01:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.057 21:01:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.057 21:01:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.057 21:01:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.057 21:01:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.057 21:01:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.057 21:01:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.057 21:01:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.057 21:01:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.057 21:01:25 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.057 21:01:25 event -- scripts/common.sh@345 -- # : 1 00:04:38.057 21:01:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.057 21:01:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.057 21:01:25 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.057 21:01:25 event -- scripts/common.sh@353 -- # local d=1 00:04:38.057 21:01:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.057 21:01:25 event -- scripts/common.sh@355 -- # echo 1 00:04:38.057 21:01:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.057 21:01:25 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.057 21:01:25 event -- scripts/common.sh@353 -- # local d=2 00:04:38.057 21:01:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.057 21:01:25 event -- scripts/common.sh@355 -- # echo 2 00:04:38.057 21:01:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.057 21:01:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.057 21:01:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.057 21:01:25 event -- scripts/common.sh@368 -- # return 0 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.057 --rc genhtml_branch_coverage=1 00:04:38.057 --rc genhtml_function_coverage=1 00:04:38.057 --rc genhtml_legend=1 00:04:38.057 --rc geninfo_all_blocks=1 00:04:38.057 --rc geninfo_unexecuted_blocks=1 00:04:38.057 00:04:38.057 ' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.057 --rc genhtml_branch_coverage=1 00:04:38.057 --rc genhtml_function_coverage=1 00:04:38.057 --rc genhtml_legend=1 00:04:38.057 --rc geninfo_all_blocks=1 00:04:38.057 --rc geninfo_unexecuted_blocks=1 00:04:38.057 00:04:38.057 ' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.057 --rc genhtml_branch_coverage=1 00:04:38.057 --rc genhtml_function_coverage=1 00:04:38.057 --rc genhtml_legend=1 00:04:38.057 --rc geninfo_all_blocks=1 00:04:38.057 --rc geninfo_unexecuted_blocks=1 00:04:38.057 00:04:38.057 ' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:38.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.057 --rc genhtml_branch_coverage=1 00:04:38.057 --rc genhtml_function_coverage=1 00:04:38.057 --rc genhtml_legend=1 00:04:38.057 --rc geninfo_all_blocks=1 00:04:38.057 --rc geninfo_unexecuted_blocks=1 00:04:38.057 00:04:38.057 ' 00:04:38.057 21:01:25 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:38.057 21:01:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.057 21:01:25 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.057 21:01:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.057 21:01:25 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.057 ************************************ 00:04:38.057 START TEST event_perf 00:04:38.057 ************************************ 00:04:38.057 21:01:25 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
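event_perf, invoked above with -m 0xF -t 1, starts one reactor on each of the four cores in the 0xF mask and counts the events each lcore processes in one second; the per-lcore totals and the timing summary follow below. A standalone run uses the same binary and flags (the path relative to the SPDK checkout is an assumption of this sketch):

  ./test/event/event_perf/event_perf -m 0xF -t 1   # -m: reactor core mask, -t: runtime in seconds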
00:04:38.057 Running I/O for 1 seconds...[2024-11-26 21:01:25.473204] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:04:38.057 [2024-11-26 21:01:25.473281] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711341 ] 00:04:38.316 [2024-11-26 21:01:25.536021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.316 [2024-11-26 21:01:25.576110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.316 [2024-11-26 21:01:25.576194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.316 [2024-11-26 21:01:25.576282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.316 [2024-11-26 21:01:25.576284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.251 Running I/O for 1 seconds... 00:04:39.251 lcore 0: 217421 00:04:39.251 lcore 1: 217419 00:04:39.251 lcore 2: 217421 00:04:39.251 lcore 3: 217420 00:04:39.251 done. 00:04:39.251 00:04:39.251 real 0m1.161s 00:04:39.251 user 0m4.092s 00:04:39.251 sys 0m0.066s 00:04:39.251 21:01:26 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.251 21:01:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.251 ************************************ 00:04:39.251 END TEST event_perf 00:04:39.251 ************************************ 00:04:39.251 21:01:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.251 21:01:26 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.251 21:01:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.251 21:01:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.251 ************************************ 00:04:39.251 START TEST event_reactor 00:04:39.251 ************************************ 00:04:39.251 21:01:26 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.509 [2024-11-26 21:01:26.705783] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:04:39.509 [2024-11-26 21:01:26.705853] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711623 ] 00:04:39.509 [2024-11-26 21:01:26.771532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.509 [2024-11-26 21:01:26.808897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.444 test_start 00:04:40.444 oneshot 00:04:40.444 tick 100 00:04:40.444 tick 100 00:04:40.444 tick 250 00:04:40.444 tick 100 00:04:40.444 tick 100 00:04:40.444 tick 100 00:04:40.444 tick 250 00:04:40.444 tick 500 00:04:40.444 tick 100 00:04:40.444 tick 100 00:04:40.444 tick 250 00:04:40.444 tick 100 00:04:40.444 tick 100 00:04:40.444 test_end 00:04:40.444 00:04:40.444 real 0m1.159s 00:04:40.444 user 0m1.087s 00:04:40.444 sys 0m0.068s 00:04:40.444 21:01:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.444 21:01:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:40.444 ************************************ 00:04:40.444 END TEST event_reactor 00:04:40.444 ************************************ 00:04:40.444 21:01:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.444 21:01:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:40.444 21:01:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.444 21:01:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.703 ************************************ 00:04:40.703 START TEST event_reactor_perf 00:04:40.703 ************************************ 00:04:40.703 21:01:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.703 [2024-11-26 21:01:27.931167] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:04:40.703 [2024-11-26 21:01:27.931231] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3711904 ] 00:04:40.703 [2024-11-26 21:01:27.994641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.703 [2024-11-26 21:01:28.030883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.639 test_start 00:04:41.639 test_end 00:04:41.639 Performance: 561378 events per second 00:04:41.639 00:04:41.639 real 0m1.155s 00:04:41.639 user 0m1.090s 00:04:41.639 sys 0m0.061s 00:04:41.639 21:01:29 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.639 21:01:29 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.639 ************************************ 00:04:41.639 END TEST event_reactor_perf 00:04:41.639 ************************************ 00:04:41.898 21:01:29 event -- event/event.sh@49 -- # uname -s 00:04:41.898 21:01:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:41.898 21:01:29 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.898 21:01:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.898 21:01:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.898 21:01:29 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.898 ************************************ 00:04:41.898 START TEST event_scheduler 00:04:41.898 ************************************ 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.899 * Looking for test storage... 
00:04:41.899 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.899 21:01:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.899 --rc genhtml_branch_coverage=1 00:04:41.899 --rc genhtml_function_coverage=1 00:04:41.899 --rc genhtml_legend=1 00:04:41.899 --rc geninfo_all_blocks=1 00:04:41.899 --rc geninfo_unexecuted_blocks=1 00:04:41.899 00:04:41.899 ' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.899 --rc genhtml_branch_coverage=1 00:04:41.899 --rc genhtml_function_coverage=1 00:04:41.899 --rc genhtml_legend=1 00:04:41.899 --rc geninfo_all_blocks=1 00:04:41.899 --rc geninfo_unexecuted_blocks=1 00:04:41.899 00:04:41.899 ' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.899 --rc genhtml_branch_coverage=1 00:04:41.899 --rc genhtml_function_coverage=1 00:04:41.899 --rc genhtml_legend=1 00:04:41.899 --rc geninfo_all_blocks=1 00:04:41.899 --rc geninfo_unexecuted_blocks=1 00:04:41.899 00:04:41.899 ' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.899 --rc genhtml_branch_coverage=1 00:04:41.899 --rc genhtml_function_coverage=1 00:04:41.899 --rc genhtml_legend=1 00:04:41.899 --rc geninfo_all_blocks=1 00:04:41.899 --rc geninfo_unexecuted_blocks=1 00:04:41.899 00:04:41.899 ' 00:04:41.899 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:41.899 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:41.899 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3712223 00:04:41.899 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.899 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3712223
00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3712223 ']' 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.899 21:01:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.899 [2024-11-26 21:01:29.330306] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:04:41.899 [2024-11-26 21:01:29.330345] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712223 ] 00:04:42.158 [2024-11-26 21:01:29.385434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.158 [2024-11-26 21:01:29.428761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.158 [2024-11-26 21:01:29.428849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.158 [2024-11-26 21:01:29.428938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.158 [2024-11-26 21:01:29.428939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:42.158 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.158 [2024-11-26 21:01:29.485454] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:42.158 [2024-11-26 21:01:29.485470] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:42.158 [2024-11-26 21:01:29.485477] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:42.158 [2024-11-26 21:01:29.485483] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:42.158 [2024-11-26 21:01:29.485487] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.158 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.158 [2024-11-26 21:01:29.557825] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
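Because the scheduler app above was started with --wait-for-rpc, initialization pauses until RPCs select a scheduler: the log shows framework_set_scheduler dynamic (the DPDK governor fails to initialize on this core mask, and the dynamic scheduler falls back to its defaults of load limit 20, core limit 80, core busy 95) followed by framework_start_init. The scheduler_create_thread subtest below then creates pinned threads through the scheduler RPC plugin. The same sequence via rpc.py might look like this, assuming PYTHONPATH lets rpc.py import the test's scheduler_plugin (that environment detail is an assumption):

  ./scripts/rpc.py framework_set_scheduler dynamic   # choose the scheduler before init completes
  ./scripts/rpc.py framework_start_init              # let subsystem initialization finish
  # pinned to core mask 0x1 and 100% active, mirroring scheduler.sh@12 below
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100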
00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.158 21:01:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.158 21:01:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 ************************************ 00:04:42.418 START TEST scheduler_create_thread 00:04:42.418 ************************************ 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 2 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 3 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 4 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 5 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 6 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 7 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 8 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 9 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 10 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.418 21:01:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.795 21:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.795 21:01:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:43.795 21:01:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:43.795 21:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.795 21:01:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.174 21:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.174 00:04:45.174 real 0m2.614s 00:04:45.174 user 0m0.015s 00:04:45.174 sys 0m0.009s 00:04:45.174 21:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.174 21:01:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.174 ************************************ 00:04:45.174 END TEST scheduler_create_thread 00:04:45.174 ************************************ 00:04:45.174 21:01:32 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.174 21:01:32 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3712223 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3712223 ']' 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3712223 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3712223 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3712223' 00:04:45.174 killing process with pid 3712223 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3712223 00:04:45.174 21:01:32 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3712223 00:04:45.433 [2024-11-26 21:01:32.687861] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
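Condensed, the lifecycle this test walks is create, retune, delete, all through the scheduler test plugin. A sketch using the same plugin RPCs seen in the trace; `scheduler_plugin` ships with the SPDK test tree and `rpc_cmd` is the test wrapper around scripts/rpc.py, so treat this as an illustration rather than a standalone script:

```bash
# One thread's lifecycle, condensed from the trace above.
# Create a thread pinned to core 0 that reports itself 100% busy.
thread_id=$(rpc_cmd --plugin scheduler_plugin \
    scheduler_thread_create -n active_pinned -m 0x1 -a 100)
# Drop its reported load to 50 so the dynamic scheduler may rebalance it.
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
# Tear the thread down again; the trace above does this for thread_id=12.
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"
```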
00:04:45.433 00:04:45.433 real 0m3.707s 00:04:45.433 user 0m5.585s 00:04:45.433 sys 0m0.336s 00:04:45.433 21:01:32 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.433 21:01:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.433 ************************************ 00:04:45.433 END TEST event_scheduler 00:04:45.433 ************************************ 00:04:45.693 21:01:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:45.693 21:01:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:45.693 21:01:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.693 21:01:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.693 21:01:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.693 ************************************ 00:04:45.693 START TEST app_repeat 00:04:45.693 ************************************ 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3712812 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3712812' 00:04:45.693 Process app_repeat pid: 3712812 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:45.693 spdk_app_start Round 0 00:04:45.693 21:01:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712812 /var/tmp/spdk-nbd.sock 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3712812 ']' 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.693 21:01:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.693 [2024-11-26 21:01:32.956623] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:04:45.693 [2024-11-26 21:01:32.956690] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3712812 ] 00:04:45.693 [2024-11-26 21:01:33.016343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.693 [2024-11-26 21:01:33.054325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.693 [2024-11-26 21:01:33.054330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.953 21:01:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.953 21:01:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.953 21:01:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.953 Malloc0 00:04:45.953 21:01:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:46.212 Malloc1 00:04:46.212 21:01:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.212 21:01:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:46.471 /dev/nbd0 00:04:46.471 21:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:46.471 21:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
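Each app_repeat round begins with the setup being logged here: two 64 MiB malloc bdevs with a 4096-byte block size, exported as NBD devices over the app's RPC socket. A sketch of that sequence, using the RPC names and paths from the log:

```bash
# Round setup as logged above: create the malloc bdevs, then export them
# as /dev/nbd0 and /dev/nbd1 over the spdk-nbd socket.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock

$RPC -s "$SOCK" bdev_malloc_create 64 4096          # creates Malloc0
$RPC -s "$SOCK" bdev_malloc_create 64 4096          # creates Malloc1
$RPC -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
$RPC -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1
```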
00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.471 1+0 records in 00:04:46.471 1+0 records out 00:04:46.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191587 s, 21.4 MB/s 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.471 21:01:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.471 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.471 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.471 21:01:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.471 /dev/nbd1 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.731 1+0 records in 00:04:46.731 1+0 records out 00:04:46.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187686 s, 21.8 MB/s 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.731 21:01:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.731 21:01:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.731 { 00:04:46.731 "nbd_device": "/dev/nbd0", 00:04:46.731 "bdev_name": "Malloc0" 00:04:46.731 }, 00:04:46.731 { 00:04:46.731 "nbd_device": "/dev/nbd1", 00:04:46.731 "bdev_name": "Malloc1" 00:04:46.731 } 00:04:46.731 ]' 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.731 { 00:04:46.731 "nbd_device": "/dev/nbd0", 00:04:46.731 "bdev_name": "Malloc0" 00:04:46.731 }, 00:04:46.731 { 00:04:46.731 "nbd_device": "/dev/nbd1", 00:04:46.731 "bdev_name": "Malloc1" 00:04:46.731 } 00:04:46.731 ]' 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.731 /dev/nbd1' 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.731 /dev/nbd1' 00:04:46.731 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.991 256+0 records in 00:04:46.991 256+0 records out 00:04:46.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106722 s, 98.3 MB/s 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.991 256+0 records in 00:04:46.991 256+0 records out 00:04:46.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126003 s, 83.2 MB/s 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.991 256+0 records in 00:04:46.991 256+0 records out 00:04:46.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134263 s, 78.1 MB/s 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.991 21:01:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.251 21:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.510 21:01:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.510 21:01:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.769 21:01:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.769 [2024-11-26 21:01:35.193629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.028 [2024-11-26 21:01:35.227915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.028 [2024-11-26 21:01:35.227918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.028 [2024-11-26 21:01:35.266993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:48.028 [2024-11-26 21:01:35.267030] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.756 21:01:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.756 21:01:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:50.757 spdk_app_start Round 1 00:04:50.757 21:01:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712812 /var/tmp/spdk-nbd.sock 00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3712812 ']' 00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
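Round 0 above finished with the data check that every round repeats (Rounds 1 and 2 below run it verbatim): write 1 MiB of random data through each NBD device with O_DIRECT, read it back with cmp, then detach. Condensed as a sketch, with the same dd/cmp parameters the log shows:

```bash
# Per-round write/verify pass, condensed from the log above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk-nbd.sock
tmp=$(mktemp)

dd if=/dev/urandom of="$tmp" bs=4096 count=256      # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$tmp" "$nbd"                      # fails loudly on mismatch
done
rm -f "$tmp"
$RPC -s "$SOCK" nbd_stop_disk /dev/nbd0
$RPC -s "$SOCK" nbd_stop_disk /dev/nbd1
```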
00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.757 21:01:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.016 21:01:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.016 21:01:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:51.016 21:01:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.016 Malloc0 00:04:51.016 21:01:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.274 Malloc1 00:04:51.274 21:01:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.274 21:01:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.275 21:01:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.534 /dev/nbd0 00:04:51.534 21:01:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.534 21:01:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:51.534 1+0 records in 00:04:51.534 1+0 records out 00:04:51.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020323 s, 20.2 MB/s 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.534 21:01:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.534 21:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.534 21:01:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.534 21:01:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.793 /dev/nbd1 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.793 1+0 records in 00:04:51.793 1+0 records out 00:04:51.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019381 s, 21.1 MB/s 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.793 21:01:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.793 { 00:04:51.793 
"nbd_device": "/dev/nbd0", 00:04:51.793 "bdev_name": "Malloc0" 00:04:51.793 }, 00:04:51.793 { 00:04:51.793 "nbd_device": "/dev/nbd1", 00:04:51.793 "bdev_name": "Malloc1" 00:04:51.793 } 00:04:51.793 ]' 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.793 { 00:04:51.793 "nbd_device": "/dev/nbd0", 00:04:51.793 "bdev_name": "Malloc0" 00:04:51.793 }, 00:04:51.793 { 00:04:51.793 "nbd_device": "/dev/nbd1", 00:04:51.793 "bdev_name": "Malloc1" 00:04:51.793 } 00:04:51.793 ]' 00:04:51.793 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.052 /dev/nbd1' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.052 /dev/nbd1' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.052 256+0 records in 00:04:52.052 256+0 records out 00:04:52.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106055 s, 98.9 MB/s 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:52.052 256+0 records in 00:04:52.052 256+0 records out 00:04:52.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126808 s, 82.7 MB/s 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:52.052 256+0 records in 00:04:52.052 256+0 records out 00:04:52.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135082 s, 77.6 MB/s 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.052 21:01:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.312 21:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.571 21:01:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.571 21:01:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.831 21:01:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.091 [2024-11-26 21:01:40.291417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.091 [2024-11-26 21:01:40.326307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.091 [2024-11-26 21:01:40.326309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.091 [2024-11-26 21:01:40.367460] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.091 [2024-11-26 21:01:40.367500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:56.380 spdk_app_start Round 2 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3712812 /var/tmp/spdk-nbd.sock 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3712812 ']' 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.380 21:01:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.380 Malloc0 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.380 Malloc1 00:04:56.380 21:01:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.380 21:01:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:56.639 /dev/nbd0 00:04:56.639 21:01:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:56.639 21:01:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:56.639 1+0 records in 00:04:56.639 1+0 records out 00:04:56.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000103631 s, 39.5 MB/s 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.639 21:01:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.639 21:01:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.639 21:01:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.639 21:01:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:56.898 /dev/nbd1 00:04:56.898 21:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:56.898 21:01:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:56.898 21:01:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:56.899 1+0 records in 00:04:56.899 1+0 records out 00:04:56.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195445 s, 21.0 MB/s 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:56.899 21:01:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:56.899 { 00:04:56.899 
"nbd_device": "/dev/nbd0", 00:04:56.899 "bdev_name": "Malloc0" 00:04:56.899 }, 00:04:56.899 { 00:04:56.899 "nbd_device": "/dev/nbd1", 00:04:56.899 "bdev_name": "Malloc1" 00:04:56.899 } 00:04:56.899 ]' 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:56.899 { 00:04:56.899 "nbd_device": "/dev/nbd0", 00:04:56.899 "bdev_name": "Malloc0" 00:04:56.899 }, 00:04:56.899 { 00:04:56.899 "nbd_device": "/dev/nbd1", 00:04:56.899 "bdev_name": "Malloc1" 00:04:56.899 } 00:04:56.899 ]' 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:56.899 21:01:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:56.899 /dev/nbd1' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.159 /dev/nbd1' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.159 256+0 records in 00:04:57.159 256+0 records out 00:04:57.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102948 s, 102 MB/s 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.159 256+0 records in 00:04:57.159 256+0 records out 00:04:57.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124386 s, 84.3 MB/s 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.159 256+0 records in 00:04:57.159 256+0 records out 00:04:57.159 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144987 s, 72.3 MB/s 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.159 21:01:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.418 21:01:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.677 21:01:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:57.677 21:01:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:57.677 21:01:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:57.677 21:01:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:57.677 21:01:45 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:57.937 21:01:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:57.937 [2024-11-26 21:01:45.356247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:58.196 [2024-11-26 21:01:45.390888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.196 [2024-11-26 21:01:45.390891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.196 [2024-11-26 21:01:45.430723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:58.196 [2024-11-26 21:01:45.430763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:01.486 21:01:48 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3712812 /var/tmp/spdk-nbd.sock 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3712812 ']' 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
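Annotation: the empty-array result a few records up is the nbd teardown invariant: after both nbd_stop_disk calls, nbd_get_disks must report no devices, and the harness counts them by piping the JSON through jq and grep -c. A minimal standalone sketch of that check, assuming rpc.py and jq are on PATH (the variable names here are illustrative, not from the harness; paths abbreviated):
    # Count /dev/nbdX entries still reported by the SPDK nbd RPC server.
    sock=/var/tmp/spdk-nbd.sock
    json=$(scripts/rpc.py -s "$sock" nbd_get_disks)
    # grep -c exits nonzero on zero matches, so guard it for 'set -e' shells,
    # exactly as the '# true' step in the trace above does.
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] && echo 'all nbd devices detached'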
00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.486 21:01:48 event.app_repeat -- event/event.sh@39 -- # killprocess 3712812 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3712812 ']' 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3712812 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3712812 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3712812' 00:05:01.486 killing process with pid 3712812 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3712812 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3712812 00:05:01.486 spdk_app_start is called in Round 0. 00:05:01.486 Shutdown signal received, stop current app iteration 00:05:01.486 Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 reinitialization... 00:05:01.486 spdk_app_start is called in Round 1. 00:05:01.486 Shutdown signal received, stop current app iteration 00:05:01.486 Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 reinitialization... 00:05:01.486 spdk_app_start is called in Round 2. 00:05:01.486 Shutdown signal received, stop current app iteration 00:05:01.486 Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 reinitialization... 00:05:01.486 spdk_app_start is called in Round 3. 
00:05:01.486 Shutdown signal received, stop current app iteration 00:05:01.486 21:01:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:01.486 21:01:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:01.486 00:05:01.486 real 0m15.644s 00:05:01.486 user 0m34.025s 00:05:01.486 sys 0m2.408s 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.486 21:01:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.486 ************************************ 00:05:01.486 END TEST app_repeat 00:05:01.486 ************************************ 00:05:01.486 21:01:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:01.486 21:01:48 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.487 21:01:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.487 21:01:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.487 21:01:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.487 ************************************ 00:05:01.487 START TEST cpu_locks 00:05:01.487 ************************************ 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:01.487 * Looking for test storage... 00:05:01.487 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.487 21:01:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.487 --rc genhtml_branch_coverage=1 00:05:01.487 --rc genhtml_function_coverage=1 00:05:01.487 --rc genhtml_legend=1 00:05:01.487 --rc geninfo_all_blocks=1 00:05:01.487 --rc geninfo_unexecuted_blocks=1 00:05:01.487 00:05:01.487 ' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.487 --rc genhtml_branch_coverage=1 00:05:01.487 --rc genhtml_function_coverage=1 00:05:01.487 --rc genhtml_legend=1 00:05:01.487 --rc geninfo_all_blocks=1 00:05:01.487 --rc geninfo_unexecuted_blocks=1 00:05:01.487 00:05:01.487 ' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.487 --rc genhtml_branch_coverage=1 00:05:01.487 --rc genhtml_function_coverage=1 00:05:01.487 --rc genhtml_legend=1 00:05:01.487 --rc geninfo_all_blocks=1 00:05:01.487 --rc geninfo_unexecuted_blocks=1 00:05:01.487 00:05:01.487 ' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.487 --rc genhtml_branch_coverage=1 00:05:01.487 --rc genhtml_function_coverage=1 00:05:01.487 --rc genhtml_legend=1 00:05:01.487 --rc geninfo_all_blocks=1 00:05:01.487 --rc geninfo_unexecuted_blocks=1 00:05:01.487 00:05:01.487 ' 00:05:01.487 21:01:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:01.487 21:01:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:01.487 21:01:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:01.487 21:01:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.487 21:01:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.487 ************************************ 
00:05:01.487 START TEST default_locks 00:05:01.487 ************************************ 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3716041 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3716041 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3716041 ']' 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.487 21:01:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.487 [2024-11-26 21:01:48.887389] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:01.487 [2024-11-26 21:01:48.887429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716041 ] 00:05:01.747 [2024-11-26 21:01:48.945836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.747 [2024-11-26 21:01:48.983280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.005 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.005 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:02.005 21:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3716041 00:05:02.005 21:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3716041 00:05:02.005 21:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.264 lslocks: write error 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3716041 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3716041 ']' 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3716041 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.264 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716041 00:05:02.524 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.524 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.524 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3716041' 00:05:02.524 killing process with pid 3716041 00:05:02.524 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3716041 00:05:02.524 21:01:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3716041 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3716041 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3716041 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3716041 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3716041 ']' 00:05:02.783 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
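Annotation: killprocess, traced just above for pid 3716041, probes the pid with kill -0, resolves its comm name via ps, checks it is not sudo, then kills and reaps it. A reduced sketch of that shape (the sudo branch and retry details are trimmed; the helper name is illustrative):
    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" || return 1                 # is the process still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for an SPDK app
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is our child
    }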
00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.784 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3716041) - No such process 00:05:02.784 ERROR: process (pid: 3716041) is no longer running 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.784 00:05:02.784 real 0m1.182s 00:05:02.784 user 0m1.134s 00:05:02.784 sys 0m0.515s 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.784 21:01:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.784 ************************************ 00:05:02.784 END TEST default_locks 00:05:02.784 ************************************ 00:05:02.784 21:01:50 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:02.784 21:01:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.784 21:01:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.784 21:01:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.784 ************************************ 00:05:02.784 START TEST default_locks_via_rpc 00:05:02.784 ************************************ 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3716243 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3716243 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3716243 ']' 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
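Annotation: the error block above is deliberate: default_locks kills the target and then asserts that waitforlisten on the dead pid fails. The NOT wrapper inverts the exit status, roughly like this sketch (the real helper also inspects es > 128 to distinguish signal deaths, as the '(( es > 128 ))' step in the trace shows):
    NOT_sketch() {
        local es=0
        "$@" || es=$?      # run the wrapped command, capture its status
        (( es != 0 ))      # succeed only if the wrapped command failed
    }
    # usage: NOT_sketch waitforlisten "$dead_pid" && echo 'pid gone, as expected'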
00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.784 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.784 [2024-11-26 21:01:50.140584] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:02.784 [2024-11-26 21:01:50.140628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716243 ] 00:05:02.784 [2024-11-26 21:01:50.198734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.044 [2024-11-26 21:01:50.238801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3716243 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.044 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3716243 ']' 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.613 
21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716243' 00:05:03.613 killing process with pid 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3716243 00:05:03.613 21:01:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3716243 00:05:03.873 00:05:03.873 real 0m1.041s 00:05:03.873 user 0m0.984s 00:05:03.873 sys 0m0.465s 00:05:03.873 21:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.873 21:01:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.873 ************************************ 00:05:03.873 END TEST default_locks_via_rpc 00:05:03.873 ************************************ 00:05:03.873 21:01:51 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.873 21:01:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.873 21:01:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.873 21:01:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.873 ************************************ 00:05:03.873 START TEST non_locking_app_on_locked_coremask 00:05:03.873 ************************************ 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3716522 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3716522 /var/tmp/spdk.sock 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3716522 ']' 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.873 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.873 [2024-11-26 21:01:51.228133] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
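Annotation: the default_locks_via_rpc case that ended above drives the same lock invariant through RPC rather than process flags: framework_disable_cpumask_locks releases the per-core lock files at runtime, framework_enable_cpumask_locks re-takes them, and lslocks asserts the state after each call. The calls as in the trace ($pid stands for the target's pid, 3716243 above; rpc.py path abbreviated):
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'unexpected: still locked'
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo 'core lock re-acquired'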
00:05:03.873 [2024-11-26 21:01:51.228170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716522 ] 00:05:03.873 [2024-11-26 21:01:51.284266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.132 [2024-11-26 21:01:51.324189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3716540 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3716540 /var/tmp/spdk2.sock 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3716540 ']' 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.132 21:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:04.392 [2024-11-26 21:01:51.582498] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:04.392 [2024-11-26 21:01:51.582543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3716540 ] 00:05:04.392 [2024-11-26 21:01:51.667145] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
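Annotation: the 'CPU core locks deactivated' notice above is the crux of non_locking_app_on_locked_coremask: the first target holds the core 0 lock, and a second target on the same mask only comes up because it is told not to take locks. In outline (flags from the trace above, binary paths abbreviated):
    build/bin/spdk_tgt -m 0x1 &                            # claims the core 0 lock
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                           # skips the lock, starts fine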
00:05:04.392 [2024-11-26 21:01:51.667171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.392 [2024-11-26 21:01:51.748064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.327 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.327 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.327 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3716522 00:05:05.327 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3716522 00:05:05.327 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.586 lslocks: write error 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3716522 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3716522 ']' 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3716522 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.586 21:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716522 00:05:05.845 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.845 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.845 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716522' 00:05:05.845 killing process with pid 3716522 00:05:05.845 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3716522 00:05:05.845 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3716522 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3716540 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3716540 ']' 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3716540 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3716540 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3716540' 00:05:06.413 
killing process with pid 3716540 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3716540 00:05:06.413 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3716540 00:05:06.673 00:05:06.673 real 0m2.794s 00:05:06.673 user 0m2.929s 00:05:06.673 sys 0m0.910s 00:05:06.673 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.673 21:01:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.673 ************************************ 00:05:06.673 END TEST non_locking_app_on_locked_coremask 00:05:06.673 ************************************ 00:05:06.673 21:01:54 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.673 21:01:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.673 21:01:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.673 21:01:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.673 ************************************ 00:05:06.673 START TEST locking_app_on_unlocked_coremask 00:05:06.673 ************************************ 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3717090 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3717090 /var/tmp/spdk.sock 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717090 ']' 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:06.673 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.673 [2024-11-26 21:01:54.092734] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:06.673 [2024-11-26 21:01:54.092772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717090 ] 00:05:06.932 [2024-11-26 21:01:54.149063] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
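Annotation: locking_app_on_unlocked_coremask, starting above, inverts the previous case: this time the first target runs with --disable-cpumask-locks, so the core 0 lock is free for the second, normally-locking target to claim. In outline (paths abbreviated):
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # holds no core lock
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # takes the core 0 lock itself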
00:05:06.932 [2024-11-26 21:01:54.149086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.932 [2024-11-26 21:01:54.187968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3717103 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3717103 /var/tmp/spdk2.sock 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717103 ']' 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.191 21:01:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.191 [2024-11-26 21:01:54.426272] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:05:07.191 [2024-11-26 21:01:54.426315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717103 ] 00:05:07.191 [2024-11-26 21:01:54.510458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.191 [2024-11-26 21:01:54.584189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.128 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.128 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.128 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3717103 00:05:08.128 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3717103 00:05:08.128 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.387 lslocks: write error 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3717090 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717090 ']' 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3717090 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717090 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717090' 00:05:08.387 killing process with pid 3717090 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3717090 00:05:08.387 21:01:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3717090 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3717103 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717103 ']' 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3717103 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.957 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717103 00:05:09.217 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.217 21:01:56 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.217 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717103' 00:05:09.217 killing process with pid 3717103 00:05:09.217 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3717103 00:05:09.217 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3717103 00:05:09.476 00:05:09.476 real 0m2.642s 00:05:09.476 user 0m2.763s 00:05:09.476 sys 0m0.870s 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.476 ************************************ 00:05:09.476 END TEST locking_app_on_unlocked_coremask 00:05:09.476 ************************************ 00:05:09.476 21:01:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:09.476 21:01:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.476 21:01:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.476 21:01:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.476 ************************************ 00:05:09.476 START TEST locking_app_on_locked_coremask 00:05:09.476 ************************************ 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3717648 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3717648 /var/tmp/spdk.sock 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717648 ']' 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.476 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.477 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.477 21:01:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.477 [2024-11-26 21:01:56.805174] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
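Annotation: every startup above blocks on the same 'Waiting for process to start up and listen on UNIX domain socket' loop. A rough sketch of what waitforlisten does, assuming the default socket path shown in the trace (retry count and sleep interval here are illustrative; the real helper also probes the RPC server):
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
            [ -S "$sock" ] && return 0               # socket is up: RPC is reachable
            sleep 0.1
        done
        return 1
    }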
00:05:09.477 [2024-11-26 21:01:56.805217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717648 ] 00:05:09.477 [2024-11-26 21:01:56.862980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.477 [2024-11-26 21:01:56.901555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3717658 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3717658 /var/tmp/spdk2.sock 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3717658 /var/tmp/spdk2.sock 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3717658 /var/tmp/spdk2.sock 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717658 ']' 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:09.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.736 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.736 [2024-11-26 21:01:57.149020] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
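Annotation: the startup failure below is the expected outcome of locking_app_on_locked_coremask: pid 3717648 already holds the core 0 lock, so the second target must abort. Conceptually the per-core lock behaves like an exclusive lock on /var/tmp/spdk_cpu_lock_NNN (file naming as in the check_remaining_locks trace near the end of this log; the locking primitive below is a simplified stand-in, not SPDK's actual implementation):
    # Roughly what 'Cannot create lock on core 0' corresponds to:
    exec 9>/var/tmp/spdk_cpu_lock_000
    if ! flock -n 9; then
        echo 'Cannot create lock on core 0, another process has claimed it' >&2
        exit 1
    fi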
00:05:09.736 [2024-11-26 21:01:57.149061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717658 ] 00:05:09.995 [2024-11-26 21:01:57.231876] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3717648 has claimed it. 00:05:09.995 [2024-11-26 21:01:57.231914] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:10.563 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3717658) - No such process 00:05:10.563 ERROR: process (pid: 3717658) is no longer running 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3717648 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3717648 00:05:10.563 21:01:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.823 lslocks: write error 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3717648 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717648 ']' 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3717648 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717648 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717648' 00:05:10.823 killing process with pid 3717648 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3717648 00:05:10.823 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3717648 00:05:11.082 00:05:11.082 real 0m1.670s 00:05:11.082 user 0m1.782s 00:05:11.082 sys 0m0.553s 00:05:11.082 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:11.082 21:01:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.082 ************************************ 00:05:11.082 END TEST locking_app_on_locked_coremask 00:05:11.082 ************************************ 00:05:11.082 21:01:58 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:11.082 21:01:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.082 21:01:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.082 21:01:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.082 ************************************ 00:05:11.082 START TEST locking_overlapped_coremask 00:05:11.082 ************************************ 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3717948 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3717948 /var/tmp/spdk.sock 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3717948 ']' 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.082 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.342 [2024-11-26 21:01:58.523198] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:05:11.342 [2024-11-26 21:01:58.523234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3717948 ] 00:05:11.342 [2024-11-26 21:01:58.581072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:11.342 [2024-11-26 21:01:58.622360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.342 [2024-11-26 21:01:58.622379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.342 [2024-11-26 21:01:58.622381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3718056 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3718056 /var/tmp/spdk2.sock 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3718056 /var/tmp/spdk2.sock 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3718056 /var/tmp/spdk2.sock 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3718056 ']' 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:11.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.602 21:01:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.602 [2024-11-26 21:01:58.881830] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
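The failure that follows is a plain core-mask overlap: the spdk_tgt above was started with -m 0x7 (cores 0-2) and holds the per-core locks, while the second instance requests -m 0x1c (cores 2-4), so the two masks share core 2. A minimal shell sketch of the overlap arithmetic, using only the masks from the two command lines:

# 0x7 = cores 0,1,2 (first target); 0x1c = cores 2,3,4 (second target)
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2

The claim_cpu_cores error below names exactly that shared core.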
00:05:11.602 [2024-11-26 21:01:58.881875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718056 ] 00:05:11.602 [2024-11-26 21:01:58.967385] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3717948 has claimed it. 00:05:11.602 [2024-11-26 21:01:58.967414] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:12.171 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3718056) - No such process 00:05:12.171 ERROR: process (pid: 3718056) is no longer running 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3717948 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3717948 ']' 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3717948 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3717948 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3717948' 00:05:12.171 killing process with pid 3717948 00:05:12.171 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3717948 00:05:12.171 21:01:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3717948 00:05:12.431 00:05:12.431 real 0m1.389s 00:05:12.431 user 0m3.876s 00:05:12.431 sys 0m0.361s 00:05:12.431 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.431 21:01:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.431 ************************************ 00:05:12.431 END TEST locking_overlapped_coremask 00:05:12.431 ************************************ 00:05:12.690 21:01:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:12.690 21:01:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.690 21:01:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.690 21:01:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 ************************************ 00:05:12.690 START TEST locking_overlapped_coremask_via_rpc 00:05:12.690 ************************************ 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3718241 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3718241 /var/tmp/spdk.sock 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3718241 ']' 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.690 21:01:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.690 [2024-11-26 21:01:59.987312] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:12.690 [2024-11-26 21:01:59.987357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718241 ] 00:05:12.690 [2024-11-26 21:02:00.050288] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
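The locks that check_remaining_locks just verified are ordinary files, one per claimed core: /var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7. A quick manual inspection while a target is up might look like the sketch below (lslocks is the same tool the test helper greps; the pid is illustrative, taken from this run's first via_rpc target):

# list the per-core lock files and confirm the target holds locks on them
ls -l /var/tmp/spdk_cpu_lock_*
lslocks -p 3718241 | grep spdk_cpu_lock

With --disable-cpumask-locks, as in the locking_overlapped_coremask_via_rpc run starting here, the target skips this locking at startup (hence the "CPU core locks deactivated." notice), which is why two instances with overlapping masks can both come up until locking is enabled over RPC.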
00:05:12.690 [2024-11-26 21:02:00.050318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.690 [2024-11-26 21:02:00.092637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.690 [2024-11-26 21:02:00.092728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.690 [2024-11-26 21:02:00.092729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3718385 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3718385 /var/tmp/spdk2.sock 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3718385 ']' 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.949 21:02:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.949 [2024-11-26 21:02:00.351647] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:12.949 [2024-11-26 21:02:00.351701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718385 ] 00:05:13.208 [2024-11-26 21:02:00.437128] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:13.208 [2024-11-26 21:02:00.437157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:13.208 [2024-11-26 21:02:00.517119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:13.208 [2024-11-26 21:02:00.520303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:13.208 [2024-11-26 21:02:00.520304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.776 [2024-11-26 21:02:01.192323] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3718241 has claimed it. 
00:05:13.776 request: 00:05:13.776 { 00:05:13.776 "method": "framework_enable_cpumask_locks", 00:05:13.776 "req_id": 1 00:05:13.776 } 00:05:13.776 Got JSON-RPC error response 00:05:13.776 response: 00:05:13.776 { 00:05:13.776 "code": -32603, 00:05:13.776 "message": "Failed to claim CPU core: 2" 00:05:13.776 } 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3718241 /var/tmp/spdk.sock 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3718241 ']' 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.776 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3718385 /var/tmp/spdk2.sock 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3718385 ']' 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
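That -32603 response is the crux of the via_rpc variant: both targets start with locks deactivated, the first enables its locks over RPC successfully, and the second then fails with "Failed to claim CPU core: 2" because the first already holds that core. rpc_cmd is presumably a thin wrapper over scripts/rpc.py (the script invoked directly in the app_cmdline test further down); a hand-run equivalent, assuming this run's sockets, would be:

# first target (default /var/tmp/spdk.sock) takes its locks; the second must fail on core 2
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_enable_cpumask_locks
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -> code -32603, "Failed to claim CPU core: 2"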
00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.035 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:14.294 00:05:14.294 real 0m1.642s 00:05:14.294 user 0m0.749s 00:05:14.294 sys 0m0.140s 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.294 21:02:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.294 ************************************ 00:05:14.294 END TEST locking_overlapped_coremask_via_rpc 00:05:14.294 ************************************ 00:05:14.294 21:02:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:14.294 21:02:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3718241 ]] 00:05:14.294 21:02:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3718241 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3718241 ']' 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3718241 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718241 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718241' 00:05:14.294 killing process with pid 3718241 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3718241 00:05:14.294 21:02:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3718241 00:05:14.554 21:02:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3718385 ]] 00:05:14.554 21:02:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3718385 00:05:14.554 21:02:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3718385 ']' 00:05:14.554 21:02:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3718385 00:05:14.554 21:02:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:14.554 21:02:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:14.554 21:02:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718385 00:05:14.814 21:02:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.814 21:02:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.814 21:02:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718385' 00:05:14.814 killing process with pid 3718385 00:05:14.814 21:02:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3718385 00:05:14.814 21:02:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3718385 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3718241 ]] 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3718241 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3718241 ']' 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3718241 00:05:15.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3718241) - No such process 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3718241 is not found' 00:05:15.074 Process with pid 3718241 is not found 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3718385 ]] 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3718385 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3718385 ']' 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3718385 00:05:15.074 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3718385) - No such process 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3718385 is not found' 00:05:15.074 Process with pid 3718385 is not found 00:05:15.074 21:02:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:15.074 00:05:15.074 real 0m13.694s 00:05:15.074 user 0m23.686s 00:05:15.074 sys 0m4.724s 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.074 21:02:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.074 ************************************ 00:05:15.074 END TEST cpu_locks 00:05:15.074 ************************************ 00:05:15.074 00:05:15.074 real 0m37.112s 00:05:15.074 user 1m9.838s 00:05:15.074 sys 0m8.020s 00:05:15.074 21:02:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.074 21:02:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.074 ************************************ 00:05:15.074 END TEST event 00:05:15.074 ************************************ 00:05:15.074 21:02:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:15.074 21:02:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.074 21:02:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.074 21:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:15.074 ************************************ 00:05:15.074 START TEST thread 00:05:15.074 ************************************ 00:05:15.074 21:02:02 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:05:15.074 * Looking for test storage... 00:05:15.074 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:05:15.075 21:02:02 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:15.335 21:02:02 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:15.335 21:02:02 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:15.335 21:02:02 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:15.335 21:02:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.335 21:02:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.335 21:02:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.335 21:02:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.335 21:02:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.335 21:02:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.335 21:02:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.335 21:02:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.335 21:02:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.335 21:02:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.335 21:02:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.335 21:02:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:15.335 21:02:02 thread -- scripts/common.sh@345 -- # : 1 00:05:15.335 21:02:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.335 21:02:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.335 21:02:02 thread -- scripts/common.sh@365 -- # decimal 1 00:05:15.335 21:02:02 thread -- scripts/common.sh@353 -- # local d=1 00:05:15.335 21:02:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.335 21:02:02 thread -- scripts/common.sh@355 -- # echo 1 00:05:15.335 21:02:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.335 21:02:02 thread -- scripts/common.sh@366 -- # decimal 2 00:05:15.335 21:02:02 thread -- scripts/common.sh@353 -- # local d=2 00:05:15.335 21:02:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.335 21:02:02 thread -- scripts/common.sh@355 -- # echo 2 00:05:15.335 21:02:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.335 21:02:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.335 21:02:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.335 21:02:02 thread -- scripts/common.sh@368 -- # return 0 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.336 --rc genhtml_branch_coverage=1 00:05:15.336 --rc genhtml_function_coverage=1 00:05:15.336 --rc genhtml_legend=1 00:05:15.336 --rc geninfo_all_blocks=1 00:05:15.336 --rc geninfo_unexecuted_blocks=1 00:05:15.336 00:05:15.336 ' 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.336 --rc genhtml_branch_coverage=1 00:05:15.336 --rc genhtml_function_coverage=1 00:05:15.336 --rc genhtml_legend=1 00:05:15.336 --rc geninfo_all_blocks=1 00:05:15.336 --rc geninfo_unexecuted_blocks=1 00:05:15.336 00:05:15.336 ' 00:05:15.336 21:02:02 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.336 --rc genhtml_branch_coverage=1 00:05:15.336 --rc genhtml_function_coverage=1 00:05:15.336 --rc genhtml_legend=1 00:05:15.336 --rc geninfo_all_blocks=1 00:05:15.336 --rc geninfo_unexecuted_blocks=1 00:05:15.336 00:05:15.336 ' 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:15.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.336 --rc genhtml_branch_coverage=1 00:05:15.336 --rc genhtml_function_coverage=1 00:05:15.336 --rc genhtml_legend=1 00:05:15.336 --rc geninfo_all_blocks=1 00:05:15.336 --rc geninfo_unexecuted_blocks=1 00:05:15.336 00:05:15.336 ' 00:05:15.336 21:02:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.336 21:02:02 thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.336 ************************************ 00:05:15.336 START TEST thread_poller_perf 00:05:15.336 ************************************ 00:05:15.336 21:02:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:15.336 [2024-11-26 21:02:02.634434] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:15.336 [2024-11-26 21:02:02.634502] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3718883 ] 00:05:15.336 [2024-11-26 21:02:02.694696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.336 [2024-11-26 21:02:02.731619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.336 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:16.718 [2024-11-26T20:02:04.154Z] ====================================== 00:05:16.718 [2024-11-26T20:02:04.154Z] busy:2705660942 (cyc) 00:05:16.718 [2024-11-26T20:02:04.154Z] total_run_count: 450000 00:05:16.718 [2024-11-26T20:02:04.154Z] tsc_hz: 2700000000 (cyc) 00:05:16.718 [2024-11-26T20:02:04.154Z] ====================================== 00:05:16.718 [2024-11-26T20:02:04.154Z] poller_cost: 6012 (cyc), 2226 (nsec) 00:05:16.718 00:05:16.718 real 0m1.158s 00:05:16.718 user 0m1.090s 00:05:16.718 sys 0m0.065s 00:05:16.718 21:02:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.718 21:02:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 ************************************ 00:05:16.718 END TEST thread_poller_perf 00:05:16.718 ************************************ 00:05:16.718 21:02:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.718 21:02:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:16.718 21:02:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.718 21:02:03 thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.718 ************************************ 00:05:16.718 START TEST thread_poller_perf 00:05:16.718 ************************************ 00:05:16.718 21:02:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:16.718 [2024-11-26 21:02:03.860772] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:16.718 [2024-11-26 21:02:03.860844] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719163 ] 00:05:16.718 [2024-11-26 21:02:03.921937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.718 [2024-11-26 21:02:03.958909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.718 Running 1000 pollers for 1 seconds with 0 microseconds period. 
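The summary block above is easy to sanity-check: poller_cost is just busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz, and the -b/-l/-t flags line up with the banner (1000 pollers, 1 microsecond period, 1 second run). Redoing the first run's numbers in plain shell:

# 2705660942 busy cycles / 450000 iterations = 6012 cycles per poller pass
echo $(( 2705660942 / 450000 ))              # -> 6012
# at tsc_hz = 2.7 GHz, 6012 cycles come to 2226 ns, matching the reported 2226 (nsec)
awk 'BEGIN { printf "%d\n", 6012 / 2.7 }'    # -> 2226

The zero-period run below obeys the same identity: 455 cyc at 2.7 GHz is roughly 168 nsec.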
00:05:17.742 [2024-11-26T20:02:05.178Z] ====================================== 00:05:17.742 [2024-11-26T20:02:05.178Z] busy:2701732770 (cyc) 00:05:17.742 [2024-11-26T20:02:05.178Z] total_run_count: 5936000 00:05:17.742 [2024-11-26T20:02:05.178Z] tsc_hz: 2700000000 (cyc) 00:05:17.742 [2024-11-26T20:02:05.178Z] ====================================== 00:05:17.742 [2024-11-26T20:02:05.178Z] poller_cost: 455 (cyc), 168 (nsec) 00:05:17.742 00:05:17.742 real 0m1.154s 00:05:17.742 user 0m1.088s 00:05:17.742 sys 0m0.062s 00:05:17.742 21:02:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.742 21:02:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.742 ************************************ 00:05:17.742 END TEST thread_poller_perf 00:05:17.742 ************************************ 00:05:17.742 21:02:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:17.742 00:05:17.742 real 0m2.601s 00:05:17.742 user 0m2.328s 00:05:17.742 sys 0m0.283s 00:05:17.742 21:02:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.742 21:02:05 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.742 ************************************ 00:05:17.742 END TEST thread 00:05:17.742 ************************************ 00:05:17.742 21:02:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:17.743 21:02:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:17.743 21:02:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.743 21:02:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.743 21:02:05 -- common/autotest_common.sh@10 -- # set +x 00:05:17.743 ************************************ 00:05:17.743 START TEST app_cmdline 00:05:17.743 ************************************ 00:05:17.743 21:02:05 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:05:18.002 * Looking for test storage... 
00:05:18.002 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.002 21:02:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.002 --rc genhtml_branch_coverage=1 00:05:18.002 --rc genhtml_function_coverage=1 00:05:18.002 --rc genhtml_legend=1 00:05:18.002 --rc geninfo_all_blocks=1 00:05:18.002 --rc geninfo_unexecuted_blocks=1 00:05:18.002 00:05:18.002 ' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.002 --rc genhtml_branch_coverage=1 00:05:18.002 --rc genhtml_function_coverage=1 00:05:18.002 --rc genhtml_legend=1 00:05:18.002 --rc geninfo_all_blocks=1 00:05:18.002 --rc geninfo_unexecuted_blocks=1 
00:05:18.002 00:05:18.002 ' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.002 --rc genhtml_branch_coverage=1 00:05:18.002 --rc genhtml_function_coverage=1 00:05:18.002 --rc genhtml_legend=1 00:05:18.002 --rc geninfo_all_blocks=1 00:05:18.002 --rc geninfo_unexecuted_blocks=1 00:05:18.002 00:05:18.002 ' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.002 --rc genhtml_branch_coverage=1 00:05:18.002 --rc genhtml_function_coverage=1 00:05:18.002 --rc genhtml_legend=1 00:05:18.002 --rc geninfo_all_blocks=1 00:05:18.002 --rc geninfo_unexecuted_blocks=1 00:05:18.002 00:05:18.002 ' 00:05:18.002 21:02:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:18.002 21:02:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3719499 00:05:18.002 21:02:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3719499 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3719499 ']' 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.002 21:02:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.002 21:02:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:18.002 [2024-11-26 21:02:05.302486] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
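Note the target for this test is started with --rpcs-allowed spdk_get_version,rpc_get_methods (see its command line above), so only those two methods are callable and anything else should come back as JSON-RPC error -32601. That is exactly what the trace below exercises; done by hand against the default /var/tmp/spdk.sock it would be:

# on the allow-list: returns the version object ("SPDK v25.01-pre git sha1 fb1630bf7")
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version
# not on the allow-list: fails with code -32601, "Method not found"
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats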
00:05:18.002 [2024-11-26 21:02:05.302528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3719499 ] 00:05:18.002 [2024-11-26 21:02:05.359926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.002 [2024-11-26 21:02:05.398927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.261 21:02:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.261 21:02:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:18.261 21:02:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:18.518 { 00:05:18.519 "version": "SPDK v25.01-pre git sha1 fb1630bf7", 00:05:18.519 "fields": { 00:05:18.519 "major": 25, 00:05:18.519 "minor": 1, 00:05:18.519 "patch": 0, 00:05:18.519 "suffix": "-pre", 00:05:18.519 "commit": "fb1630bf7" 00:05:18.519 } 00:05:18.519 } 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:18.519 21:02:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:05:18.519 21:02:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:18.778 request: 00:05:18.778 { 00:05:18.778 "method": "env_dpdk_get_mem_stats", 00:05:18.778 "req_id": 1 00:05:18.778 } 00:05:18.778 Got JSON-RPC error response 00:05:18.778 response: 00:05:18.778 { 00:05:18.778 "code": -32601, 00:05:18.778 "message": "Method not found" 00:05:18.778 } 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.778 21:02:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3719499 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3719499 ']' 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3719499 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.778 21:02:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3719499 00:05:18.778 21:02:06 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.778 21:02:06 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.778 21:02:06 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3719499' 00:05:18.778 killing process with pid 3719499 00:05:18.778 21:02:06 app_cmdline -- common/autotest_common.sh@973 -- # kill 3719499 00:05:18.778 21:02:06 app_cmdline -- common/autotest_common.sh@978 -- # wait 3719499 00:05:19.037 00:05:19.037 real 0m1.225s 00:05:19.037 user 0m1.422s 00:05:19.037 sys 0m0.407s 00:05:19.037 21:02:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.037 21:02:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:19.037 ************************************ 00:05:19.037 END TEST app_cmdline 00:05:19.037 ************************************ 00:05:19.037 21:02:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:19.037 21:02:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.037 21:02:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.038 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:19.038 ************************************ 00:05:19.038 START TEST version 00:05:19.038 ************************************ 00:05:19.038 21:02:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:05:19.038 * Looking for test storage... 
00:05:19.298 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.298 21:02:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.298 21:02:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.298 21:02:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.298 21:02:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.298 21:02:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.298 21:02:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.298 21:02:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.298 21:02:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.298 21:02:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.298 21:02:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.298 21:02:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.298 21:02:06 version -- scripts/common.sh@344 -- # case "$op" in 00:05:19.298 21:02:06 version -- scripts/common.sh@345 -- # : 1 00:05:19.298 21:02:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.298 21:02:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.298 21:02:06 version -- scripts/common.sh@365 -- # decimal 1 00:05:19.298 21:02:06 version -- scripts/common.sh@353 -- # local d=1 00:05:19.298 21:02:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.298 21:02:06 version -- scripts/common.sh@355 -- # echo 1 00:05:19.298 21:02:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.298 21:02:06 version -- scripts/common.sh@366 -- # decimal 2 00:05:19.298 21:02:06 version -- scripts/common.sh@353 -- # local d=2 00:05:19.298 21:02:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.298 21:02:06 version -- scripts/common.sh@355 -- # echo 2 00:05:19.298 21:02:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.298 21:02:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.298 21:02:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.298 21:02:06 version -- scripts/common.sh@368 -- # return 0 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.298 --rc genhtml_branch_coverage=1 00:05:19.298 --rc genhtml_function_coverage=1 00:05:19.298 --rc genhtml_legend=1 00:05:19.298 --rc geninfo_all_blocks=1 00:05:19.298 --rc geninfo_unexecuted_blocks=1 00:05:19.298 00:05:19.298 ' 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.298 --rc genhtml_branch_coverage=1 00:05:19.298 --rc genhtml_function_coverage=1 00:05:19.298 --rc genhtml_legend=1 00:05:19.298 --rc geninfo_all_blocks=1 00:05:19.298 --rc geninfo_unexecuted_blocks=1 00:05:19.298 00:05:19.298 ' 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.298 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.298 --rc genhtml_branch_coverage=1 00:05:19.298 --rc genhtml_function_coverage=1 00:05:19.298 --rc genhtml_legend=1 00:05:19.298 --rc geninfo_all_blocks=1 00:05:19.298 --rc geninfo_unexecuted_blocks=1 00:05:19.298 00:05:19.298 ' 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.298 --rc genhtml_branch_coverage=1 00:05:19.298 --rc genhtml_function_coverage=1 00:05:19.298 --rc genhtml_legend=1 00:05:19.298 --rc geninfo_all_blocks=1 00:05:19.298 --rc geninfo_unexecuted_blocks=1 00:05:19.298 00:05:19.298 ' 00:05:19.298 21:02:06 version -- app/version.sh@17 -- # get_header_version major 00:05:19.298 21:02:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # cut -f2 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.298 21:02:06 version -- app/version.sh@17 -- # major=25 00:05:19.298 21:02:06 version -- app/version.sh@18 -- # get_header_version minor 00:05:19.298 21:02:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # cut -f2 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.298 21:02:06 version -- app/version.sh@18 -- # minor=1 00:05:19.298 21:02:06 version -- app/version.sh@19 -- # get_header_version patch 00:05:19.298 21:02:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # cut -f2 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.298 21:02:06 version -- app/version.sh@19 -- # patch=0 00:05:19.298 21:02:06 version -- app/version.sh@20 -- # get_header_version suffix 00:05:19.298 21:02:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # cut -f2 00:05:19.298 21:02:06 version -- app/version.sh@14 -- # tr -d '"' 00:05:19.298 21:02:06 version -- app/version.sh@20 -- # suffix=-pre 00:05:19.298 21:02:06 version -- app/version.sh@22 -- # version=25.1 00:05:19.298 21:02:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:19.298 21:02:06 version -- app/version.sh@28 -- # version=25.1rc0 00:05:19.298 21:02:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:05:19.298 21:02:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:19.298 21:02:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:19.298 21:02:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:19.298 00:05:19.298 real 0m0.231s 00:05:19.298 user 0m0.139s 00:05:19.298 sys 0m0.133s 00:05:19.298 21:02:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.298 21:02:06 version -- 
common/autotest_common.sh@10 -- # set +x 00:05:19.298 ************************************ 00:05:19.298 END TEST version 00:05:19.298 ************************************ 00:05:19.298 21:02:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:19.298 21:02:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:19.298 21:02:06 -- spdk/autotest.sh@194 -- # uname -s 00:05:19.298 21:02:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:19.298 21:02:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.298 21:02:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:19.299 21:02:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:19.299 21:02:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.299 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:19.299 21:02:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:19.299 21:02:06 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:05:19.299 21:02:06 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:19.299 21:02:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.299 21:02:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.299 21:02:06 -- common/autotest_common.sh@10 -- # set +x 00:05:19.558 ************************************ 00:05:19.558 START TEST nvmf_rdma 00:05:19.558 ************************************ 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:05:19.558 * Looking for test storage... 00:05:19.558 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.558 21:02:06 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.558 --rc genhtml_branch_coverage=1 00:05:19.558 --rc genhtml_function_coverage=1 00:05:19.558 --rc genhtml_legend=1 00:05:19.558 --rc geninfo_all_blocks=1 00:05:19.558 --rc geninfo_unexecuted_blocks=1 00:05:19.558 00:05:19.558 ' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.558 --rc genhtml_branch_coverage=1 00:05:19.558 --rc genhtml_function_coverage=1 00:05:19.558 --rc genhtml_legend=1 00:05:19.558 --rc geninfo_all_blocks=1 00:05:19.558 --rc geninfo_unexecuted_blocks=1 00:05:19.558 00:05:19.558 ' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.558 --rc genhtml_branch_coverage=1 00:05:19.558 --rc genhtml_function_coverage=1 00:05:19.558 --rc genhtml_legend=1 00:05:19.558 --rc geninfo_all_blocks=1 00:05:19.558 --rc geninfo_unexecuted_blocks=1 00:05:19.558 00:05:19.558 ' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.558 --rc genhtml_branch_coverage=1 00:05:19.558 --rc genhtml_function_coverage=1 00:05:19.558 --rc genhtml_legend=1 00:05:19.558 --rc geninfo_all_blocks=1 00:05:19.558 --rc geninfo_unexecuted_blocks=1 00:05:19.558 00:05:19.558 ' 00:05:19.558 21:02:06 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:05:19.558 21:02:06 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:19.558 21:02:06 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.558 21:02:06 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:05:19.558 ************************************ 00:05:19.558 START TEST nvmf_target_core 00:05:19.558 ************************************ 00:05:19.558 21:02:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:05:19.817 * Looking for test storage... 00:05:19.817 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.817 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.818 --rc genhtml_branch_coverage=1 00:05:19.818 --rc genhtml_function_coverage=1 00:05:19.818 --rc genhtml_legend=1 00:05:19.818 --rc geninfo_all_blocks=1 00:05:19.818 --rc geninfo_unexecuted_blocks=1 00:05:19.818 00:05:19.818 ' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.818 --rc genhtml_branch_coverage=1 00:05:19.818 --rc genhtml_function_coverage=1 00:05:19.818 --rc genhtml_legend=1 00:05:19.818 --rc geninfo_all_blocks=1 00:05:19.818 --rc geninfo_unexecuted_blocks=1 00:05:19.818 00:05:19.818 ' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.818 --rc genhtml_branch_coverage=1 00:05:19.818 --rc genhtml_function_coverage=1 00:05:19.818 --rc genhtml_legend=1 00:05:19.818 --rc geninfo_all_blocks=1 00:05:19.818 --rc geninfo_unexecuted_blocks=1 00:05:19.818 00:05:19.818 ' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.818 --rc genhtml_branch_coverage=1 00:05:19.818 --rc genhtml_function_coverage=1 00:05:19.818 --rc genhtml_legend=1 00:05:19.818 --rc geninfo_all_blocks=1 00:05:19.818 --rc geninfo_unexecuted_blocks=1 00:05:19.818 00:05:19.818 ' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.818 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:19.818 
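The gate that keeps repeating in this trace, 'lt 1.15 2', decides whether the installed lcov predates 2.x before the coverage flags are exported: scripts/common.sh splits both version strings on dots, dashes and colons, then compares the fields numerically until one side wins. A minimal standalone sketch of that comparison, reconstructed from the xtrace above and simplified on the assumption that all fields are numeric:

    #!/usr/bin/env bash
    # Field-by-field version compare, reconstructed from the cmp_versions
    # xtrace above ("lt 1.15 2" asks: does lcov 1.15 predate 2.x?).
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
            ((a > b)) && { [[ $op == '>'* ]]; return; }
            ((a < b)) && { [[ $op == '<'* ]]; return; }
        done
        [[ $op == *'='* ]]               # all fields equal: only <=, >=, == pass
    }

    lt() { cmp_versions "$1" '<' "$2"; }
    lt 1.15 2 && echo "old lcov: export the --rc branch/function coverage options"

With lcov 1.15 the comparison succeeds, which is why every sub-test in this log re-exports the same LCOV_OPTS block of --rc lcov_branch_coverage/genhtml settings.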
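Separately, the '[' '' -eq 1 ']' line above is a real scripting bug, not test traffic: test/nvmf/common.sh line 33 feeds an unset variable to test's -eq, which requires an integer on both sides, so bash prints "[: : integer expression expected" every time the file is sourced. A short reproduction with the usual guards; FLAG is a stand-in name, since the trace shows only the empty expansion, not which variable is tested:

    #!/usr/bin/env bash
    # Minimal reproduction of the "[: : integer expression expected" noise
    # logged above. FLAG is hypothetical: the trace only shows that the
    # variable tested at nvmf/common.sh line 33 expands to an empty string.
    FLAG=""

    [ "$FLAG" -eq 1 ] && echo "flag set"      # errors: '' is not an integer

    # Usual guards: give the expansion a default, or compare as a string.
    [ "${FLAG:-0}" -eq 1 ] && echo "flag set"
    [[ $FLAG == 1 ]] && echo "flag set"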
************************************ 00:05:19.818 START TEST nvmf_abort 00:05:19.818 ************************************ 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:05:19.818 * Looking for test storage... 00:05:19.818 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.818 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.078 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.078 --rc genhtml_branch_coverage=1 00:05:20.079 --rc genhtml_function_coverage=1 00:05:20.079 --rc genhtml_legend=1 00:05:20.079 --rc geninfo_all_blocks=1 00:05:20.079 --rc geninfo_unexecuted_blocks=1 00:05:20.079 00:05:20.079 ' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.079 --rc genhtml_branch_coverage=1 00:05:20.079 --rc genhtml_function_coverage=1 00:05:20.079 --rc genhtml_legend=1 00:05:20.079 --rc geninfo_all_blocks=1 00:05:20.079 --rc geninfo_unexecuted_blocks=1 00:05:20.079 00:05:20.079 ' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.079 --rc genhtml_branch_coverage=1 00:05:20.079 --rc genhtml_function_coverage=1 00:05:20.079 --rc genhtml_legend=1 00:05:20.079 --rc geninfo_all_blocks=1 00:05:20.079 --rc geninfo_unexecuted_blocks=1 00:05:20.079 00:05:20.079 ' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.079 --rc genhtml_branch_coverage=1 00:05:20.079 --rc genhtml_function_coverage=1 00:05:20.079 --rc genhtml_legend=1 00:05:20.079 --rc geninfo_all_blocks=1 00:05:20.079 --rc geninfo_unexecuted_blocks=1 00:05:20.079 00:05:20.079 ' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:20.079 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:20.079 21:02:07 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:26.655 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:26.655 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:05:26.655 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:26.656 Found net devices under 0000:18:00.0: mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:26.656 Found net devices under 0000:18:00.1: mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:26.656 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:05:26.656 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:26.656 altname enp24s0f0np0 00:05:26.656 altname ens785f0np0 00:05:26.656 inet 192.168.100.8/24 scope global mlx_0_0 00:05:26.656 valid_lft forever preferred_lft forever 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:26.656 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:26.656 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:26.656 altname enp24s0f1np1 00:05:26.656 altname ens785f1np1 00:05:26.656 inet 192.168.100.9/24 scope global mlx_0_1 00:05:26.656 valid_lft forever preferred_lft forever 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:26.656 21:02:12 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:26.656 21:02:12 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:26.656 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:26.657 192.168.100.9' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:26.657 192.168.100.9' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:26.657 192.168.100.9' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:26.657 21:02:13 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3723252 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3723252 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3723252 ']' 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 [2024-11-26 21:02:13.096594] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:26.657 [2024-11-26 21:02:13.096653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.657 [2024-11-26 21:02:13.158306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.657 [2024-11-26 21:02:13.200814] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:26.657 [2024-11-26 21:02:13.200848] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:26.657 [2024-11-26 21:02:13.200855] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.657 [2024-11-26 21:02:13.200860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.657 [2024-11-26 21:02:13.200866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
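Before the target launches, the harness harvests its listen addresses by running the same three-stage pipeline over each RDMA netdev: 'ip -o -4 addr show', awk for the fourth field, cut to drop the prefix length. That is how 192.168.100.8 and 192.168.100.9 become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP a few lines up. A self-contained sketch of the extraction, mirroring the get_ip_address helper seen in the trace:

    #!/usr/bin/env bash
    # Pull the bare IPv4 address off an interface, as the trace does for
    # mlx_0_0 (-> 192.168.100.8) and mlx_0_1 (-> 192.168.100.9).
    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is "192.168.100.8/24",
        # so cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    first_ip=$(get_ip_address mlx_0_0)
    [[ -n $first_ip ]] || { echo "no IPv4 address on mlx_0_0" >&2; exit 1; }
    echo "NVMF_FIRST_TARGET_IP=$first_ip"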
00:05:26.657 [2024-11-26 21:02:13.202111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.657 [2024-11-26 21:02:13.202127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.657 [2024-11-26 21:02:13.202130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 [2024-11-26 21:02:13.374063] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x241fcf0/0x24241e0) succeed. 00:05:26.657 [2024-11-26 21:02:13.389849] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24212e0/0x2465880) succeed. 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 Malloc0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 Delay0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 [2024-11-26 21:02:13.549827] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.657 21:02:13 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:26.657 [2024-11-26 21:02:13.650000] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:28.565 Initializing NVMe Controllers 00:05:28.565 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:05:28.565 controller IO queue size 128 less than required 00:05:28.565 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:28.565 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:28.565 Initialization complete. Launching workers. 
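Strip away the xtrace noise and the whole abort scenario is seven RPCs plus one example binary: create the RDMA transport, wrap a 64 MiB Malloc0 bdev in a Delay0 bdev so I/O stays queued long enough to be aborted, expose it as nqn.2016-06.io.spdk:cnode0, listen on 192.168.100.8:4420, then fire the abort example at it with queue depth 128. Condensed into a standalone script (SPDK_DIR and the rpc wrapper are assumptions; the harness issues the same calls through its rpc_cmd helper):

    #!/usr/bin/env bash
    # Condensed from the rpc_cmd calls traced above. SPDK_DIR is an
    # assumption; the harness uses its own workspace checkout.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB backing bdev, 4 KiB blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency keeps I/O queued
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Drive 128-deep I/O at the delayed namespace and abort it, as the log does:
    "$SPDK_DIR/build/examples/abort" \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128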
00:05:28.565 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 46796 00:05:28.565 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 46857, failed to submit 62 00:05:28.565 success 46797, unsuccessful 60, failed 0 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:28.565 rmmod nvme_rdma 00:05:28.565 rmmod nvme_fabrics 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3723252 ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3723252 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3723252 ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3723252 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723252 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723252' 00:05:28.565 killing process with pid 3723252 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3723252 00:05:28.565 21:02:15 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3723252 00:05:28.825 21:02:16 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:05:28.825 00:05:28.825 real 0m8.928s 00:05:28.825 user 0m12.358s 00:05:28.825 sys 0m4.762s 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:28.825 ************************************ 00:05:28.825 END TEST nvmf_abort 00:05:28.825 ************************************ 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:28.825 ************************************ 00:05:28.825 START TEST nvmf_ns_hotplug_stress 00:05:28.825 ************************************ 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:05:28.825 * Looking for test storage... 00:05:28.825 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.825 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:29.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.087 --rc genhtml_branch_coverage=1 00:05:29.087 --rc genhtml_function_coverage=1 00:05:29.087 --rc genhtml_legend=1 00:05:29.087 --rc geninfo_all_blocks=1 00:05:29.087 --rc geninfo_unexecuted_blocks=1 00:05:29.087 00:05:29.087 ' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:29.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.087 --rc genhtml_branch_coverage=1 00:05:29.087 --rc genhtml_function_coverage=1 00:05:29.087 --rc genhtml_legend=1 00:05:29.087 --rc geninfo_all_blocks=1 00:05:29.087 --rc geninfo_unexecuted_blocks=1 00:05:29.087 00:05:29.087 ' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:29.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.087 --rc genhtml_branch_coverage=1 00:05:29.087 --rc genhtml_function_coverage=1 00:05:29.087 --rc genhtml_legend=1 00:05:29.087 --rc geninfo_all_blocks=1 00:05:29.087 --rc geninfo_unexecuted_blocks=1 00:05:29.087 00:05:29.087 ' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:29.087 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:29.087 --rc genhtml_branch_coverage=1 00:05:29.087 --rc genhtml_function_coverage=1 00:05:29.087 --rc genhtml_legend=1 00:05:29.087 --rc geninfo_all_blocks=1 00:05:29.087 --rc geninfo_unexecuted_blocks=1 00:05:29.087 00:05:29.087 ' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:29.087 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.088 21:02:16 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:29.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:29.088 21:02:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:35.659 21:02:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:05:35.659 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:35.659 21:02:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:05:35.659 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:35.659 Found net devices under 0000:18:00.0: mlx_0_0 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:35.659 Found net devices under 0000:18:00.1: mlx_0_1 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:35.659 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:35.660 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:35.660 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:05:35.660 altname enp24s0f0np0 00:05:35.660 altname ens785f0np0 00:05:35.660 inet 192.168.100.8/24 scope global mlx_0_0 00:05:35.660 valid_lft forever preferred_lft forever 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:35.660 21:02:21 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:35.660 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:35.660 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:05:35.660 altname enp24s0f1np1 00:05:35.660 altname ens785f1np1 00:05:35.660 inet 192.168.100.9/24 scope global mlx_0_1 00:05:35.660 valid_lft forever preferred_lft forever 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:35.660 21:02:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:35.660 192.168.100.9' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:35.660 192.168.100.9' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:35.660 192.168.100.9' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3727243 00:05:35.660 21:02:22 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3727243 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3727243 ']' 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.660 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.660 [2024-11-26 21:02:22.134030] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:05:35.660 [2024-11-26 21:02:22.134084] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:35.660 [2024-11-26 21:02:22.194107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.660 [2024-11-26 21:02:22.232752] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:35.660 [2024-11-26 21:02:22.232786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:35.660 [2024-11-26 21:02:22.232794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.660 [2024-11-26 21:02:22.232801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.660 [2024-11-26 21:02:22.232805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
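The nvmfappstart sequence traced above (launch nvmf_tgt, record nvmfpid, then waitforlisten) reduces to the start-and-poll pattern below. The target command line, pid variable, and socket path are copied from the log; the polling loop itself is an assumption about what waitforlisten does, using rpc_get_methods as a cheap liveness probe rather than the script's actual mechanism.

# Start the target: shm id 0, tracepoint mask 0xFFFF, core mask 0xE
# (cores 1-3, matching the three "Reactor started on core" notices below)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Poll until the app answers on its UNIX-domain RPC socket, bailing out
# early if it dies during startup
until $RPC -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    sleep 0.5
done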
00:05:35.660 [2024-11-26 21:02:22.234123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.661 [2024-11-26 21:02:22.234223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.661 [2024-11-26 21:02:22.234230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:35.661 [2024-11-26 21:02:22.544345] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7d7cf0/0x7dc1e0) succeed. 00:05:35.661 [2024-11-26 21:02:22.552302] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7d92e0/0x81d880) succeed. 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:35.661 21:02:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:35.661 [2024-11-26 21:02:23.000536] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:35.661 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:05:35.919 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:36.177 Malloc0 00:05:36.177 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:36.177 Delay0 00:05:36.177 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.436 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:05:36.695 NULL1 00:05:36.695 21:02:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:36.953 21:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3727541 00:05:36.953 21:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:36.953 21:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:36.953 21:02:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.889 Read completed with error (sct=0, sc=11) 00:05:37.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:37.889 21:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:38.149 21:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:38.149 21:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:38.408 true 00:05:38.408 21:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:38.408 21:02:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 21:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:39.345 21:02:26 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:39.345 21:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:39.604 true 00:05:39.604 21:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:39.604 21:02:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.542 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 21:02:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:40.543 21:02:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:40.543 21:02:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:40.802 true 00:05:40.802 21:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:40.802 21:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 21:02:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:41.739 21:02:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:41.739 21:02:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:41.997 true 00:05:41.997 21:02:29 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:41.997 21:02:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 21:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:42.932 21:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:42.932 21:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:43.192 true 00:05:43.192 21:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:43.192 21:02:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.129 21:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.129 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:44.129 21:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:44.129 21:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:44.388 true 00:05:44.388 21:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:44.388 21:02:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 21:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.326 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:45.326 21:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:45.326 21:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:45.585 true 00:05:45.585 21:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:45.585 21:02:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.522 21:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.522 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.523 21:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:46.523 21:02:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:46.782 true 00:05:46.782 21:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:46.782 21:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 21:02:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:47.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.719 21:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:47.719 21:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:47.978 true 00:05:47.978 21:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:47.978 21:02:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 21:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.916 21:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:48.916 21:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:49.174 true 00:05:49.174 21:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:49.174 21:02:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 21:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:50.112 21:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:50.112 21:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:50.370 true 
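Each cycle in the trace above is one iteration of the hotplug/resize loop in test/nvmf/target/ns_hotplug_stress.sh; the sh@44-sh@50 markers give its shape. A minimal bash reconstruction inferred from those markers, not copied from the script (perf_pid is a hypothetical name standing in for the backgrounded I/O generator, pid 3727541 in this run):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # path as it appears in the trace
    perf_pid=3727541   # hypothetical: pid of the background I/O generator being probed at sh@44
    null_size=1001     # starting value not visible in this excerpt; the trace picks up at 1002
    while kill -0 "$perf_pid"; do                                      # sh@44: loop while the generator is alive
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach it, backed by the Delay0 bdev
        null_size=$((null_size + 1))                                   # sh@49: 1002, 1003, ... in the trace
        $rpc bdev_null_resize NULL1 "$null_size"                       # sh@50: resize NULL1 (the RPC prints "true")
    done

NULL1 appears to back the subsystem's second namespace (NSID 2 in the perf summary further down), so each pass exercises hot-remove/re-add on one namespace and a live resize on the other while I/O is in flight.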
00:05:50.370 21:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:50.371 21:02:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 21:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:51.566 21:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:51.566 21:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:51.566 true 00:05:51.566 21:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:51.566 21:02:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 21:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:52.761 21:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:52.761 21:02:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:52.761 true 00:05:52.761 21:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:52.761 21:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
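The recurring suppression notices are the expected fallout of that loop: sct=0 is the NVMe generic command status type, and sc=11 (0x0b, Invalid Namespace or Format) is what in-flight reads complete with while NSID 1 is detached; the initiator rate-limits the message rather than logging every failed read. To gauge how often I/O actually landed in a removal window, one could count the notices in the captured console log (file name hypothetical):

    grep -c 'Read completed with error (sct=0, sc=11)' nvmf-phy-autotest.log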
00:05:53.699 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.699 21:02:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.958 21:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:53.958 21:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:53.958 true 00:05:53.958 21:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:53.958 21:02:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 21:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:54.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:55.154 21:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:55.154 21:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:55.154 true 00:05:55.154 21:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:55.154 21:02:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.093 21:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.093 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.093 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.351 21:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:56.351 21:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:56.351 true 00:05:56.351 21:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:56.351 21:02:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 21:02:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:57.388 21:02:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:57.388 21:02:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:57.658 true 00:05:57.658 21:02:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:57.658 21:02:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 21:02:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.593 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:05:58.593 21:02:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:58.594 21:02:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:58.852 true 00:05:58.852 21:02:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:05:58.852 21:02:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 21:02:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:59.790 21:02:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:59.790 21:02:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:00.049 true 00:06:00.049 21:02:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:00.049 21:02:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 21:02:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:00.987 21:02:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:00.987 21:02:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:01.246 true 
00:06:01.246 21:02:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:01.246 21:02:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 21:02:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:02.182 21:02:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:02.182 21:02:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:02.441 true 00:06:02.441 21:02:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:02.441 21:02:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.379 21:02:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.379 21:02:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:03.379 21:02:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:03.637 true 00:06:03.637 21:02:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:03.637 21:02:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.896 21:02:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.896 21:02:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:03.896 21:02:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:04.155 true 00:06:04.155 21:02:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:04.155 21:02:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 21:02:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.534 21:02:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:05.534 21:02:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:05.534 true 00:06:05.793 21:02:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:05.793 21:02:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 21:02:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.730 21:02:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:06.730 21:02:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:06.730 true 00:06:06.989 21:02:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:06.990 21:02:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:07.926 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.926 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:07.927 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:07.927 true 00:06:08.186 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:08.186 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.186 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.444 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:08.444 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:08.704 true 00:06:08.704 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:08.704 21:02:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.704 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.963 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:08.963 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:09.222 true 00:06:09.222 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541 00:06:09.222 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.482 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.482 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:09.482 21:02:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:09.741 Initializing NVMe Controllers 00:06:09.741 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:09.741 Controller IO queue size 128, less 
than required.
00:06:09.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:09.741 Controller IO queue size 128, less than required.
00:06:09.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:09.741 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:09.741 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:09.741 Initialization complete. Launching workers.
00:06:09.741 ========================================================
00:06:09.741 Latency(us)
00:06:09.741 Device Information : IOPS MiB/s Average min max
00:06:09.741 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5989.33 2.92 19063.09 785.45 1127199.16
00:06:09.741 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 36086.77 17.62 3546.88 2110.67 269552.70
00:06:09.741 ========================================================
00:06:09.741 Total : 42076.10 20.54 5755.54 785.45 1127199.16
00:06:09.741
00:06:09.741 true
00:06:09.741 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3727541
00:06:09.741 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3727541) - No such process
00:06:09.741 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3727541
00:06:09.741 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.000 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:10.260 null0
00:06:10.260 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:10.260 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:10.260 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:10.520 null1
00:06:10.520 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:10.520 null2 00:06:10.520 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.520 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.520 21:02:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:10.779 null3 00:06:10.780 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.780 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.780 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:11.039 null4 00:06:11.039 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.039 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.039 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:11.039 null5 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:11.298 null6 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.298 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:11.558 null7 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
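The summary table above is the I/O generator's exit report, and it is consistent with the test design: NSID 1 (the namespace being repeatedly detached, backed by Delay0) managed only about 5989 IOPS with a worst-case read latency over a second, while the undisturbed NSID 2 sustained about 36087 IOPS. Once the generator exits, the kill -0 probe at sh@44 fails ("No such process"), which is what ends the resize loop; the script then strips both namespaces and provisions one null bdev per worker for the threaded phase. A reconstruction of that provisioning step from the sh@58-sh@60 markers, not the verbatim script:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                     # sh@58
    pids=()                                        # sh@58: filled in by the worker-launch loop below
    for ((i = 0; i < nthreads; i++)); do           # sh@59
        $rpc bdev_null_create "null$i" 100 4096    # sh@60: name, size, block size as seen in the trace
    done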
00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
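The interleaved sh@62-sh@64 and sh@14-sh@17 markers above come from eight backgrounded add_remove workers, each bound to its own namespace/bdev pair; the wait on their pids (3734095 through 3734116) appears just below. Reconstructed from those markers, again as a sketch rather than the verbatim script, the phase looks roughly like this:

    add_remove() {
        local nsid=$1 bdev=$2                      # sh@14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do             # sh@16: ten add/remove cycles per worker
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # sh@17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # sh@18
        done
    }
    for ((i = 0; i < nthreads; i++)); do           # sh@62
        add_remove $((i + 1)) "null$i" &           # sh@63: one worker per namespace
        pids+=($!)                                 # sh@64
    done
    wait "${pids[@]}"                              # sh@66: the eight pids listed in the log

With all eight workers hammering nvmf_subsystem_add_ns and nvmf_subsystem_remove_ns on the same subsystem concurrently, this phase presumably stresses the target's namespace-list handling under contention rather than the data-path error handling exercised earlier.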
00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.558 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3734095 3734098 3734100 3734104 3734107 3734111 3734114 3734116 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.559 21:02:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.818 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.077 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.078 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.337 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:02:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.596 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.855 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.114 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.114 21:03:00 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.374 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.632 21:03:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
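The @16, @17 and @18 markers in these xtrace lines are line numbers inside target/ns_hotplug_stress.sh. Read together with the way eight distinct NSIDs interleave, the trace is consistent with eight parallel workers, each looping ten times over an add/remove pair. A minimal sketch of that shape, reconstructed from the trace rather than copied from the SPDK script (the add_remove helper and variable names here are illustrative assumptions):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                        # hypothetical helper, one worker per NSID
        local nsid=$1 i
        for ((i = 0; i < 10; ++i)); do    # traced as @16: (( ++i )) / (( i < 10 ))
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))"  # traced as @17
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$nsid"                      # traced as @18
        done
    }

    for nsid in $(seq 1 8); do add_remove "$nsid" & done  # eight workers in parallel
    wait

The interleaved (( ++i )) / (( i < 10 )) bursts without an RPC between them, as near the end of the test below, would then be the workers' final loop-exit checks.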
00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.890 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.891 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.891 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.891 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.150 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.410 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.670 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.671 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.671 21:03:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.671 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.930 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
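Stripped of the trace prefixes, the two RPCs being hammered here are ordinary rpc.py calls and can be reproduced by hand against a running SPDK target (rpc.py talks to the default socket /var/tmp/spdk.sock):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # attach bdev null5 to subsystem cnode1 as namespace 6, exactly as in the trace
    $rpc_py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
    # detach namespace 6 again
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6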
00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.190 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.449 21:03:02 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:06:15.449 rmmod nvme_rdma
00:06:15.449 rmmod nvme_fabrics
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3727243 ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3727243
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3727243 ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3727243
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727243
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727243'
00:06:15.449 killing process with pid 3727243
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3727243
00:06:15.449 21:03:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3727243
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:06:15.709 
00:06:15.709 real 0m46.843s
00:06:15.709 user 3m17.624s
00:06:15.709 sys 0m11.557s
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:15.709 ************************************
00:06:15.709 END TEST nvmf_ns_hotplug_stress
00:06:15.709 ************************************
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:15.709 ************************************
00:06:15.709 START TEST nvmf_delete_subsystem
00:06:15.709 ************************************
00:06:15.709 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:06:15.970 * Looking for test storage...
00:06:15.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- #
(( v = 0 )) 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.970 --rc genhtml_branch_coverage=1 00:06:15.970 --rc genhtml_function_coverage=1 00:06:15.970 --rc genhtml_legend=1 00:06:15.970 --rc geninfo_all_blocks=1 00:06:15.970 --rc geninfo_unexecuted_blocks=1 00:06:15.970 00:06:15.970 ' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.970 --rc genhtml_branch_coverage=1 00:06:15.970 --rc genhtml_function_coverage=1 00:06:15.970 --rc genhtml_legend=1 00:06:15.970 --rc geninfo_all_blocks=1 00:06:15.970 --rc geninfo_unexecuted_blocks=1 00:06:15.970 00:06:15.970 ' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.970 --rc genhtml_branch_coverage=1 00:06:15.970 --rc genhtml_function_coverage=1 00:06:15.970 --rc genhtml_legend=1 00:06:15.970 --rc geninfo_all_blocks=1 00:06:15.970 --rc geninfo_unexecuted_blocks=1 00:06:15.970 00:06:15.970 ' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.970 --rc genhtml_branch_coverage=1 00:06:15.970 --rc genhtml_function_coverage=1 00:06:15.970 --rc genhtml_legend=1 00:06:15.970 --rc geninfo_all_blocks=1 00:06:15.970 --rc geninfo_unexecuted_blocks=1 00:06:15.970 
00:06:15.970 ' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.970 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.971 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.971 21:03:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:21.263 21:03:08 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:21.263 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:21.264 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:21.264 
21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:21.264 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:21.264 Found net devices under 0000:18:00.0: mlx_0_0 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:21.264 Found net devices under 0000:18:00.1: mlx_0_1 00:06:21.264 21:03:08 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:21.264 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:21.524 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:21.524 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:21.524 altname enp24s0f0np0 00:06:21.524 altname ens785f0np0 00:06:21.524 inet 192.168.100.8/24 scope global mlx_0_0 00:06:21.524 valid_lft forever preferred_lft forever 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:21.524 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:21.525 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:21.525 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:06:21.525 altname enp24s0f1np1 00:06:21.525 altname 
ens785f1np1 00:06:21.525 inet 192.168.100.9/24 scope global mlx_0_1 00:06:21.525 valid_lft forever preferred_lft forever 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:21.525 192.168.100.9' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:21.525 192.168.100.9' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:21.525 192.168.100.9' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3738915 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3738915 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 3738915 ']' 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.525 21:03:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.525 [2024-11-26 21:03:08.889989] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:06:21.525 [2024-11-26 21:03:08.890033] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.525 [2024-11-26 21:03:08.950222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.784 [2024-11-26 21:03:08.988262] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.784 [2024-11-26 21:03:08.988294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.784 [2024-11-26 21:03:08.988300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.784 [2024-11-26 21:03:08.988306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.784 [2024-11-26 21:03:08.988310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
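Note: the "/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" complaint flagged earlier in this section (just before the PCI scan) is a genuine, if harmless, shell bug rather than log noise. The xtrace shows the script evaluating '[' '' -eq 1 ']': whichever flag common.sh line 33 tests expands to the empty string on this run, and test's numeric -eq with an empty operand is an error (exit status 2), not a clean false, so the guarded branch is skipped only by accident. A minimal reproduction and a defensive rewrite, sketched with an illustrative variable name (not the one common.sh actually uses):

    flag=""
    [ "$flag" -eq 1 ] && echo enabled        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting the expansion keeps -eq well-formed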
00:06:21.784 [2024-11-26 21:03:08.989322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.784 [2024-11-26 21:03:08.989325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.784 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.784 [2024-11-26 21:03:09.141000] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ecfe70/0x1ed4360) succeed. 00:06:21.784 [2024-11-26 21:03:09.148958] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ed13c0/0x1f15a00) succeed. 00:06:21.785 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.785 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:21.785 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 [2024-11-26 21:03:09.233297] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 NULL1 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 Delay0 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3739050 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:22.043 21:03:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:22.043 [2024-11-26 21:03:09.345743] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
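Note: the setup traced above is what makes the delete-while-busy scenario deterministic. The subsystem's namespace is a null bdev (NULL1, 1000 MiB, 512-byte blocks) wrapped in a delay bdev (Delay0) whose four latency arguments are all 1000000 microseconds, so every I/O parks in the delay queue for about a second; spdk_nvme_perf is then started with queue depth 128 and a 5-second run, and the script sleeps only 2 seconds before deleting the subsystem, guaranteeing a full queue of in-flight I/O at deletion time. The same target-side setup can be reproduced against a running nvmf_tgt with the stock rpc.py client; a sketch, assuming an SPDK checkout as the working directory and collapsing the test's rpc_cmd wrapper into direct calls:

    #!/usr/bin/env bash
    set -e
    RPC=scripts/rpc.py    # talks to /var/tmp/spdk.sock by default

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $RPC bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s average and tail latency, reads and writes
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0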
00:06:23.946 21:03:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:23.946 21:03:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.946 21:03:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 NVMe io qpair process completion error 00:06:25.323 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.323 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:25.323 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3739050 00:06:25.323 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:25.581 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:25.581 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3739050 00:06:25.581 21:03:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Write completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Write completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Write completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Write completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Read completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: -6 00:06:26.151 Write completed with error (sct=0, sc=8) 00:06:26.151 starting I/O failed: 
-6
00:06:26.151 Read completed with error (sct=0, sc=8)
00:06:26.151 starting I/O failed: -6
00:06:26.151 Write completed with error (sct=0, sc=8)
00:06:26.151 starting I/O failed: -6
[several hundred further "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records from 00:06:26.151-00:06:26.152, interleaved with "starting I/O failed: -6" while the workers were still submitting, condensed]
00:06:26.152 Initializing NVMe Controllers
00:06:26.152 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:06:26.152 Controller IO queue size 128, less than required.
00:06:26.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:26.152 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:26.152 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:26.152 Initialization complete. Launching workers.
00:06:26.152 ========================================================
00:06:26.152                                                                                  Latency(us)
00:06:26.152 Device Information                                                             :    IOPS   MiB/s    Average        min        max
00:06:26.152 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   80.41    0.04 1594805.16 1000098.39 2979177.96
00:06:26.152 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   80.41    0.04 1596179.89 1000868.39 2980251.21
00:06:26.152 ========================================================
00:06:26.152 Total                                                                          :  160.82    0.08 1595492.53 1000098.39 2980251.21
00:06:26.152
00:06:26.152 [2024-11-26 21:03:13.433410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:06:26.152 [2024-11-26 21:03:13.433447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
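Note: the completion flood and the two *ERROR* records above are this phase's pass condition, not a failure. sct=0 selects the NVMe generic command status type, and status code 8 under it is "Command Aborted due to SQ Deletion" in the NVMe base specification (the log prints only the raw fields; the name is taken from the spec), which is exactly what deleting cnode1 with 128 commands queued in the delay bdev should produce. The -6 in "starting I/O failed: -6" and "CQ transport error -6" is Linux errno 6, ENXIO ("No such device or address"), raised once the qpairs themselves are torn down. A tiny decoder for eyeballing such traces, illustrative only and covering just the values seen in this log:

    # usage: decode_nvme_status SCT SC   (generic status type only)
    decode_nvme_status() {
        local sct=$1 sc=$2
        if [ "$sct" -ne 0 ]; then
            echo "status type $sct: see the NVMe base spec"
            return
        fi
        case "$sc" in
            0) echo "Successful Completion" ;;
            8) echo "Command Aborted due to SQ Deletion" ;;
            *) echo "generic status code $sc: see the NVMe base spec" ;;
        esac
    }
    decode_nvme_status 0 8    # -> Command Aborted due to SQ Deletion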
00:06:26.152 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:26.152 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:26.152 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3739050 00:06:26.152 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3739050 00:06:26.720 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3739050) - No such process 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3739050 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3739050 00:06:26.720 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3739050 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.721 [2024-11-26 21:03:13.957552] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:26.721 21:03:13 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3739856 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:26.721 21:03:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.721 [2024-11-26 21:03:14.039394] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:27.290 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.290 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:27.290 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.550 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.550 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:27.550 21:03:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.117 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.117 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:28.117 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.685 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.685 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:28.685 21:03:15 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.306 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.306 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:29.306 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.621 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.621 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:29.621 21:03:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:30.189 21:03:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:30.189 21:03:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:30.189 21:03:17 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:30.757 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:30.757 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:30.757 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.326 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.326 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:31.326 21:03:18 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:31.584 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.584 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:31.584 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.152 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.152 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:32.152 21:03:19 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.721 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.721 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:32.721 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.293 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.293 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:33.293 21:03:20 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.860 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.860 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:33.860 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.860 Initializing NVMe Controllers 00:06:33.860 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:06:33.860 
Controller IO queue size 128, less than required.
00:06:33.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:33.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:33.860 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:33.860 Initialization complete. Launching workers.
00:06:33.860 ========================================================
00:06:33.860                                                                Latency(us)
00:06:33.860 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:06:33.860 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1001405.26 1000049.05 1003660.80
00:06:33.860 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002539.15 1000188.33 1006147.20
00:06:33.860 ========================================================
00:06:33.860 Total                                                                :     256.00       0.12 1001972.21 1000049.05 1006147.20
00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3739856 00:06:34.117 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3739856) - No such process 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3739856 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:34.117 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:34.375 rmmod nvme_rdma 00:06:34.375 rmmod nvme_fabrics 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3738915 ']' 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3738915 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3738915 ']' 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3738915 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@959 -- # uname 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3738915 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3738915' 00:06:34.375 killing process with pid 3738915 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3738915 00:06:34.375 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3738915 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:34.634 00:06:34.634 real 0m18.760s 00:06:34.634 user 0m48.481s 00:06:34.634 sys 0m5.283s 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.634 ************************************ 00:06:34.634 END TEST nvmf_delete_subsystem 00:06:34.634 ************************************ 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:34.634 ************************************ 00:06:34.634 START TEST nvmf_host_management 00:06:34.634 ************************************ 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:06:34.634 * Looking for test storage... 
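For reference, every wait in the delete_subsystem run above is the same bounded kill -0 poll (target/delete_subsystem.sh@35-38 and @57-60). A minimal sketch of that pattern, with an illustrative function name rather than the harness's exact helper:

    # Poll a PID with kill -0 every 0.5s; give up after 30 rounds (~15s),
    # mirroring the (( delay++ > 30 )) bound seen in delete_subsystem.sh above.
    wait_for_pid_exit() {
        local pid=$1 delay=0
        while kill -0 "$pid" 2>/dev/null; do
            (( delay++ > 30 )) && return 1   # still alive after the bound
            sleep 0.5
        done
        return 0                             # kill -0 failed: process exited
    }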
00:06:34.634 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.634 21:03:21 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.634 --rc genhtml_branch_coverage=1 00:06:34.634 --rc genhtml_function_coverage=1 00:06:34.634 --rc genhtml_legend=1 00:06:34.634 --rc geninfo_all_blocks=1 00:06:34.634 --rc geninfo_unexecuted_blocks=1 00:06:34.634 00:06:34.634 ' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.634 --rc genhtml_branch_coverage=1 00:06:34.634 --rc genhtml_function_coverage=1 00:06:34.634 --rc genhtml_legend=1 00:06:34.634 --rc geninfo_all_blocks=1 00:06:34.634 --rc geninfo_unexecuted_blocks=1 00:06:34.634 00:06:34.634 ' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.634 --rc genhtml_branch_coverage=1 00:06:34.634 --rc genhtml_function_coverage=1 00:06:34.634 --rc genhtml_legend=1 00:06:34.634 --rc geninfo_all_blocks=1 00:06:34.634 --rc geninfo_unexecuted_blocks=1 00:06:34.634 00:06:34.634 ' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.634 --rc genhtml_branch_coverage=1 00:06:34.634 --rc genhtml_function_coverage=1 00:06:34.634 --rc genhtml_legend=1 00:06:34.634 --rc geninfo_all_blocks=1 00:06:34.634 --rc geninfo_unexecuted_blocks=1 00:06:34.634 00:06:34.634 ' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:34.634 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:34.634 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:34.892 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:34.893 21:03:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:40.163 21:03:26 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:40.163 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:40.163 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:40.163 Found net devices under 0000:18:00.0: mlx_0_0 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.163 21:03:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:06:40.163 Found net devices under 0000:18:00.1: mlx_0_1 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
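The device walk above is pure sysfs: each candidate PCI function is globbed for kernel net devices, which is exactly where the "Found net devices under ..." lines come from. The essence, using the two functions from this run:

    # Map each Mellanox PCI function to its netdev(s), as nvmf/common.sh does.
    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done
    # The interface IPv4 is then read back with the same pipeline used below:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8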
00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:40.163 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:40.164 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:40.164 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:40.164 altname enp24s0f0np0 00:06:40.164 altname ens785f0np0 00:06:40.164 inet 192.168.100.8/24 scope global mlx_0_0 00:06:40.164 valid_lft forever preferred_lft forever 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:40.164 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:40.164 link/ether 50:6b:4b:b4:ac:7b brd 
ff:ff:ff:ff:ff:ff 00:06:40.164 altname enp24s0f1np1 00:06:40.164 altname ens785f1np1 00:06:40.164 inet 192.168.100.9/24 scope global mlx_0_1 00:06:40.164 valid_lft forever preferred_lft forever 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:40.164 21:03:27 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:40.164 192.168.100.9' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:40.164 192.168.100.9' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:40.164 192.168.100.9' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3744439 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3744439 00:06:40.164 
21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3744439 ']' 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.164 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:40.164 [2024-11-26 21:03:27.249184] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:06:40.164 [2024-11-26 21:03:27.249235] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.164 [2024-11-26 21:03:27.308826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.164 [2024-11-26 21:03:27.349014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.164 [2024-11-26 21:03:27.349053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.164 [2024-11-26 21:03:27.349060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.164 [2024-11-26 21:03:27.349065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.165 [2024-11-26 21:03:27.349069] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
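Stripped of xtrace, nvmfappstart above amounts to launching nvmf_tgt and blocking until its RPC socket answers. A simplified sketch; the real waitforlisten in autotest_common.sh is more defensive about retries and stale sockets:

    # Start the target on cores 1-4 (-m 0x1E) and wait for the RPC socket.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock framework_wait_init &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1   # bail out if the target died on startup
        sleep 0.5
    done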
00:06:40.165 [2024-11-26 21:03:27.350428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.165 [2024-11-26 21:03:27.350500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.165 [2024-11-26 21:03:27.350586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.165 [2024-11-26 21:03:27.350587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.165 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.165 [2024-11-26 21:03:27.512159] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x102e830/0x1032d20) succeed. 00:06:40.165 [2024-11-26 21:03:27.520904] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x102fec0/0x10743c0) succeed. 
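rpc_cmd in these traces is effectively scripts/rpc.py pointed at the target's socket, so the transport created above, and the subsystem plumbing from the delete_subsystem run earlier, correspond to plain invocations like:

    # Same RPCs the harness issues via rpc_cmd; all values taken from this log.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420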
00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.425 Malloc0 00:06:40.425 [2024-11-26 21:03:27.703741] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3744689 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3744689 /var/tmp/bdevperf.sock 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3744689 ']' 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:40.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
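The /dev/fd/63 in the bdevperf command line above is bash process substitution: gen_nvmf_target_json prints the attach-controller config (shown in full just below) and bdevperf reads it as if it were a file on disk. Schematically:

    # Feed the generated JSON to bdevperf without a temp file.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!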
00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:40.425 { 00:06:40.425 "params": { 00:06:40.425 "name": "Nvme$subsystem", 00:06:40.425 "trtype": "$TEST_TRANSPORT", 00:06:40.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:40.425 "adrfam": "ipv4", 00:06:40.425 "trsvcid": "$NVMF_PORT", 00:06:40.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:40.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:40.425 "hdgst": ${hdgst:-false}, 00:06:40.425 "ddgst": ${ddgst:-false} 00:06:40.425 }, 00:06:40.425 "method": "bdev_nvme_attach_controller" 00:06:40.425 } 00:06:40.425 EOF 00:06:40.425 )") 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:40.425 21:03:27 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:40.425 "params": { 00:06:40.425 "name": "Nvme0", 00:06:40.425 "trtype": "rdma", 00:06:40.425 "traddr": "192.168.100.8", 00:06:40.425 "adrfam": "ipv4", 00:06:40.425 "trsvcid": "4420", 00:06:40.425 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:40.425 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:40.425 "hdgst": false, 00:06:40.425 "ddgst": false 00:06:40.425 }, 00:06:40.425 "method": "bdev_nvme_attach_controller" 00:06:40.425 }' 00:06:40.425 [2024-11-26 21:03:27.795787] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:06:40.425 [2024-11-26 21:03:27.795831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744689 ] 00:06:40.425 [2024-11-26 21:03:27.855972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.684 [2024-11-26 21:03:27.894083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.684 Running I/O for 10 seconds... 
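For orientation, the printf above emits only the per-controller entry; gen_nvmf_target_json wraps it in a bdev-subsystem envelope before bdevperf sees it. That envelope is not visible in this trace, so the shape below is an assumption about the helper's output, with the params copied from the log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }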
00:06:40.684 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.684 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:40.684 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:40.684 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.684 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=171 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 171 -ge 100 ']' 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
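waitforio (traced at host_management.sh@45-@64 above) polls the perf process over its private RPC socket until the attached bdev has completed at least 100 reads; here the very first poll reports read_io_count=171, so ret flips to 0 and the loop breaks immediately. Reconstructed from the traced lines, with the caveat that the inter-poll delay is not visible in the trace and is assumed:

waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i
    for ((i = 10; i != 0; i--)); do
        # Ask the running bdevperf for per-bdev I/O statistics and pull
        # the cumulative read count out of the JSON with jq.
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25   # assumed polling interval; not shown in the trace
    done
    return $ret
}

The 100-read threshold only proves that I/O is flowing; the 10-second verify job keeps running in the background while the test moves on to the nvmf_subsystem_remove_host/add_host churn traced next, whose fallout is the long run of ABORTED - SQ DELETION completions that follows.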
00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.944 21:03:28 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:41.883 296.00 IOPS, 18.50 MiB/s [2024-11-26T20:03:29.319Z] [2024-11-26 21:03:29.189063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106e4580 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106d4500 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c4480 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106b4400 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a4380 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010694300 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010684280 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 
sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010674200 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010664180 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010654100 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010644080 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010634000 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010623f80 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010613f00 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010603e80 len:0x10000 key:0x182a00 00:06:41.883 [2024-11-26 21:03:29.189298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168d1e80 len:0x10000 key:0x182000 00:06:41.883 [2024-11-26 21:03:29.189311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 
[2024-11-26 21:03:29.189319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168c1e00 len:0x10000 key:0x182000 00:06:41.883 [2024-11-26 21:03:29.189327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168b1d80 len:0x10000 key:0x182000 00:06:41.883 [2024-11-26 21:03:29.189341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168a1d00 len:0x10000 key:0x182000 00:06:41.883 [2024-11-26 21:03:29.189354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016891c80 len:0x10000 key:0x182000 00:06:41.883 [2024-11-26 21:03:29.189368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.883 [2024-11-26 21:03:29.189375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016881c00 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016871b80 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016861b00 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016851a80 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016841a00 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016831980 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016821900 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016811880 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016801800 len:0x10000 key:0x182000 00:06:41.884 [2024-11-26 21:03:29.189490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166eff80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166dff00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166cfe80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166bfe00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166afd80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001669fd00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001668fc80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001667fc00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001666fb80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001665fb00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001664fa80 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001663fa00 len:0x10000 key:0x182100 00:06:41.884 [2024-11-26 21:03:29.189654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008893000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088b4000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189688] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088d5000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000088f6000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c2f000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008c0e000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008ac4000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200008aa3000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3e7000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3c6000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3a5000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a384000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a363000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a342000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a321000 len:0x10000 key:0x182900 00:06:41.884 [2024-11-26 21:03:29.189856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.884 [2024-11-26 21:03:29.189864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a300000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6ff000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6de000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6bd000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a69c000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000a67b000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a65a000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.189958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a639000 len:0x10000 key:0x182900 00:06:41.885 [2024-11-26 21:03:29.189964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:8660a000 sqhd:7250 p:0 m:0 dnr:0 00:06:41.885 [2024-11-26 21:03:29.192688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:41.885 task offset: 40832 on job bdev=Nvme0n1 fails 00:06:41.885 00:06:41.885 Latency(us) 00:06:41.885 [2024-11-26T20:03:29.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.885 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:41.885 Job: Nvme0n1 ended in about 1.12 seconds with error 00:06:41.885 Verification LBA range: start 0x0 length 0x400 00:06:41.885 Nvme0n1 : 1.12 264.23 16.51 57.13 0.00 197623.42 2123.85 1019060.53 00:06:41.885 [2024-11-26T20:03:29.321Z] =================================================================================================================== 00:06:41.885 [2024-11-26T20:03:29.321Z] Total : 264.23 16.51 57.13 0.00 197623.42 2123.85 1019060.53 00:06:41.885 [2024-11-26 21:03:29.194198] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3744689 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:41.885 { 00:06:41.885 "params": { 00:06:41.885 "name": "Nvme$subsystem", 00:06:41.885 "trtype": "$TEST_TRANSPORT", 00:06:41.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.885 "adrfam": "ipv4", 00:06:41.885 "trsvcid": "$NVMF_PORT", 00:06:41.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.885 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:41.885 "hdgst": ${hdgst:-false}, 00:06:41.885 "ddgst": ${ddgst:-false} 00:06:41.885 }, 00:06:41.885 "method": "bdev_nvme_attach_controller" 00:06:41.885 } 00:06:41.885 EOF 00:06:41.885 )") 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:41.885 21:03:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:41.885 "params": { 00:06:41.885 "name": "Nvme0", 00:06:41.885 "trtype": "rdma", 00:06:41.885 "traddr": "192.168.100.8", 00:06:41.885 "adrfam": "ipv4", 00:06:41.885 "trsvcid": "4420", 00:06:41.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.885 "hdgst": false, 00:06:41.885 "ddgst": false 00:06:41.885 }, 00:06:41.885 "method": "bdev_nvme_attach_controller" 00:06:41.885 }' 00:06:41.885 [2024-11-26 21:03:29.243996] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:06:41.885 [2024-11-26 21:03:29.244034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744969 ] 00:06:41.885 [2024-11-26 21:03:29.302171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.144 [2024-11-26 21:03:29.340144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.144 Running I/O for 1 seconds... 00:06:43.522 3264.00 IOPS, 204.00 MiB/s 00:06:43.522 Latency(us) 00:06:43.522 [2024-11-26T20:03:30.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:43.523 Verification LBA range: start 0x0 length 0x400 00:06:43.523 Nvme0n1 : 1.01 3306.15 206.63 0.00 0.00 18978.84 558.27 31263.10 00:06:43.523 [2024-11-26T20:03:30.959Z] =================================================================================================================== 00:06:43.523 [2024-11-26T20:03:30.959Z] Total : 3306.15 206.63 0.00 0.00 18978.84 558.27 31263.10 00:06:43.523 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3744689 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:43.523 
21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:43.523 rmmod nvme_rdma 00:06:43.523 rmmod nvme_fabrics 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3744439 ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3744439 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3744439 ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3744439 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3744439 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3744439' 00:06:43.523 killing process with pid 3744439 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3744439 00:06:43.523 21:03:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3744439 00:06:43.783 [2024-11-26 21:03:31.034142] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:43.783 00:06:43.783 real 0m9.156s 00:06:43.783 user 0m18.779s 00:06:43.783 sys 0m4.776s 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:43.783 ************************************ 00:06:43.783 END TEST nvmf_host_management 
00:06:43.783 ************************************ 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:43.783 ************************************ 00:06:43.783 START TEST nvmf_lvol 00:06:43.783 ************************************ 00:06:43.783 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:43.783 * Looking for test storage... 00:06:44.043 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.043 --rc genhtml_branch_coverage=1 00:06:44.043 --rc genhtml_function_coverage=1 00:06:44.043 --rc genhtml_legend=1 00:06:44.043 --rc geninfo_all_blocks=1 00:06:44.043 --rc geninfo_unexecuted_blocks=1 00:06:44.043 00:06:44.043 ' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.043 --rc genhtml_branch_coverage=1 00:06:44.043 --rc genhtml_function_coverage=1 00:06:44.043 --rc genhtml_legend=1 00:06:44.043 --rc geninfo_all_blocks=1 00:06:44.043 --rc geninfo_unexecuted_blocks=1 00:06:44.043 00:06:44.043 ' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.043 --rc genhtml_branch_coverage=1 00:06:44.043 --rc genhtml_function_coverage=1 00:06:44.043 --rc genhtml_legend=1 00:06:44.043 --rc geninfo_all_blocks=1 00:06:44.043 --rc geninfo_unexecuted_blocks=1 00:06:44.043 00:06:44.043 ' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.043 --rc genhtml_branch_coverage=1 00:06:44.043 --rc genhtml_function_coverage=1 00:06:44.043 --rc genhtml_legend=1 00:06:44.043 --rc geninfo_all_blocks=1 00:06:44.043 --rc geninfo_unexecuted_blocks=1 00:06:44.043 00:06:44.043 ' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.043 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.044 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.044 21:03:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.318 21:03:35 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:49.318 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:06:49.319 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:06:49.319 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:49.319 21:03:35 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:49.319 Found net devices under 0000:18:00.0: mlx_0_0 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:49.319 Found net devices under 0000:18:00.1: mlx_0_1 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
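The trace above is nvmf/common.sh discovering the RDMA-capable NICs: it builds vendor:device allow-lists (0x15b3:0x1015 is a Mellanox ConnectX-4 Lx function), keeps only the mlx entries because SPDK_TEST_NVMF_NICS=mlx5, maps each PCI address to its netdev names through sysfs, and then loads the IB/RDMA kernel modules. A minimal sketch of those two steps, reusing the PCI addresses from this run:

    #!/usr/bin/env bash
    # Map each ConnectX PCI function found in the log to its netdev name(s).
    # Assumes the devices exist; the harness additionally checks for empty globs.
    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
        echo "Found net devices under $pci: ${pci_net_devs[*]##*/}"
    done
    # Kernel modules that load_ib_rdma_modules probes in the trace (needs root):
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done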
00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:49.319 21:03:35 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:49.319 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:49.319 
21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:49.319 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:49.319 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:49.319 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:06:49.319 altname enp24s0f0np0 00:06:49.319 altname ens785f0np0 00:06:49.319 inet 192.168.100.8/24 scope global mlx_0_0 00:06:49.319 valid_lft forever preferred_lft forever 00:06:49.319 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:49.319 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:49.320 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:49.320 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:06:49.320 altname enp24s0f1np1 00:06:49.320 altname ens785f1np1 00:06:49.320 inet 192.168.100.9/24 scope global mlx_0_1 00:06:49.320 valid_lft forever preferred_lft forever 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:06:49.320 192.168.100.9' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:06:49.320 192.168.100.9' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:06:49.320 192.168.100.9' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:06:49.320 
21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3748328 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3748328 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3748328 ']' 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.320 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.320 [2024-11-26 21:03:36.172743] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:06:49.320 [2024-11-26 21:03:36.172789] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.320 [2024-11-26 21:03:36.231314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.320 [2024-11-26 21:03:36.270780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:49.320 [2024-11-26 21:03:36.270813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:49.320 [2024-11-26 21:03:36.270820] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:49.320 [2024-11-26 21:03:36.270826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:49.320 [2024-11-26 21:03:36.270831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
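Two helpers from the trace are worth spelling out. get_ip_address (nvmf/common.sh@116-117) reads an interface's IPv4 address by taking field 4 of "ip -o -4 addr show" and stripping the prefix length, which is where 192.168.100.8 and 192.168.100.9 come from; nvmfappstart then launches nvmf_tgt on three cores (-m 0x7) and blocks until its RPC socket answers. A sketch of both, with the socket poll approximated by the rpc_get_methods RPC (the harness's waitforlisten performs the equivalent wait against /var/tmp/spdk.sock):

    # Read the IPv4 address assigned to an RDMA netdev.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 in this run

    # Start the target and wait for its RPC socket to come up.
    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    until "$spdk/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done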
00:06:49.320 [2024-11-26 21:03:36.272094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.321 [2024-11-26 21:03:36.272192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.321 [2024-11-26 21:03:36.272193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:49.321 [2024-11-26 21:03:36.577582] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x164d9f0/0x1651ee0) succeed. 00:06:49.321 [2024-11-26 21:03:36.585488] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x164efe0/0x1693580) succeed. 00:06:49.321 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.580 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:49.580 21:03:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:49.839 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:49.839 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:49.839 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:50.098 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=659f04b4-ad32-4e6a-948f-b12d9c2ba9c5 00:06:50.098 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 659f04b4-ad32-4e6a-948f-b12d9c2ba9c5 lvol 20 00:06:50.357 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5658d8e8-c0a8-4a88-870f-0c2ee982bf57 00:06:50.357 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:50.615 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5658d8e8-c0a8-4a88-870f-0c2ee982bf57 00:06:50.615 21:03:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:50.926 [2024-11-26 21:03:38.152831] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:50.926 21:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:51.183 21:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3748839 00:06:51.183 21:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:51.183 21:03:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:52.120 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5658d8e8-c0a8-4a88-870f-0c2ee982bf57 MY_SNAPSHOT 00:06:52.379 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2eba12ed-5f9b-4ea5-bb45-486d147f90dc 00:06:52.379 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5658d8e8-c0a8-4a88-870f-0c2ee982bf57 30 00:06:52.379 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2eba12ed-5f9b-4ea5-bb45-486d147f90dc MY_CLONE 00:06:52.638 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d543eb23-0b85-4fd6-bfd6-eedbb4308e73 00:06:52.638 21:03:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d543eb23-0b85-4fd6-bfd6-eedbb4308e73 00:06:52.897 21:03:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3748839 00:07:03.022 Initializing NVMe Controllers 00:07:03.022 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:03.022 Controller IO queue size 128, less than required. 00:07:03.022 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:03.022 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:03.022 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:03.022 Initialization complete. Launching workers. 
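Condensed, the lvol workflow this test drives over rpc.py is the following; this is a sketch assembled from the commands in the trace, with shell variables standing in for the UUIDs the RPCs print (659f04b4... for the store, 5658d8e8... for the volume). The snapshot, resize, clone and inflate steps all run while spdk_nvme_perf keeps 128 random writes in flight against the exported namespace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                      # Malloc0
    $rpc bdev_malloc_create 64 512                      # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)      # lvol store on the raid
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)     # LVOL_BDEV_INIT_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # ... spdk_nvme_perf -w randwrite -q 128 runs against the listener ...
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                    # LVOL_BDEV_FINAL_SIZE
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                     # decouple clone from snapshot
    # Teardown in reverse dependency order (subsystem, volume, store):
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"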
00:07:03.022 ========================================================
00:07:03.022                                                           Latency(us)
00:07:03.022 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:07:03.022 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   17208.70      67.22    7440.06    2019.23   45522.56
00:07:03.022 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   17115.00      66.86    7480.56    3070.51   50409.48
00:07:03.022 ========================================================
00:07:03.022 Total                                                                  :   34323.70     134.08    7460.26    2019.23   50409.48
00:07:03.022
00:07:03.022 21:03:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:03.022 21:03:49 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5658d8e8-c0a8-4a88-870f-0c2ee982bf57
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 659f04b4-ad32-4e6a-948f-b12d9c2ba9c5
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:07:03.022 rmmod nvme_rdma
00:07:03.022 rmmod nvme_fabrics
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3748328 ']'
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3748328
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3748328 ']'
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3748328
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3748328
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3748328' 00:07:03.022 killing process with pid 3748328 00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3748328 00:07:03.022 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3748328 00:07:03.282 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:03.282 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:03.282 00:07:03.282 real 0m19.534s 00:07:03.282 user 1m8.843s 00:07:03.283 sys 0m4.632s 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:03.283 ************************************ 00:07:03.283 END TEST nvmf_lvol 00:07:03.283 ************************************ 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.283 21:03:50 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.543 ************************************ 00:07:03.543 START TEST nvmf_lvs_grow 00:07:03.543 ************************************ 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:07:03.543 * Looking for test storage... 
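The START TEST / END TEST banners and the real/user/sys triple above come from autotest's run_test wrapper, which frames each test script with banners, times it with the bash time keyword, and propagates its exit status. In outline (a sketch of the idea, not autotest_common.sh verbatim):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"              # emits the real/user/sys lines seen in the log
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    run_test nvmf_lvs_grow ./nvmf_lvs_grow.sh --transport=rdma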
00:07:03.543 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:03.543 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.544 --rc genhtml_branch_coverage=1 00:07:03.544 --rc genhtml_function_coverage=1 00:07:03.544 --rc genhtml_legend=1 00:07:03.544 --rc geninfo_all_blocks=1 00:07:03.544 --rc geninfo_unexecuted_blocks=1 00:07:03.544 00:07:03.544 ' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
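The dense block above is scripts/common.sh deciding whether the installed lcov predates version 2: both version strings are split on '.', '-' and ':' into arrays (ver1, ver2) and compared numerically one component at a time, with missing components treated as 0. A condensed sketch of that comparison:

    version_lt() {   # status 0 iff $1 < $2, e.g. version_lt 1.15 2
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"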
00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:03.544 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:03.544 21:03:50 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:08.824 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.825 21:03:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:08.825 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:08.825 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:08.825 21:03:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:08.825 Found net devices under 0000:18:00.0: mlx_0_0 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:08.825 Found net devices under 0000:18:00.1: mlx_0_1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:08.825 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.825 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:08.825 altname enp24s0f0np0 00:07:08.825 altname ens785f0np0 00:07:08.825 inet 192.168.100.8/24 scope global mlx_0_0 00:07:08.825 valid_lft forever preferred_lft forever 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:08.825 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:08.826 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:08.826 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:08.826 altname enp24s0f1np1 00:07:08.826 altname ens785f1np1 00:07:08.826 inet 192.168.100.9/24 scope global mlx_0_1 00:07:08.826 valid_lft forever preferred_lft forever 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.826 21:03:56 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:08.826 192.168.100.9' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:08.826 192.168.100.9' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:08.826 192.168.100.9' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:08.826 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3754291 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3754291 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3754291 ']' 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.086 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:09.087 [2024-11-26 21:03:56.314564] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:09.087 [2024-11-26 21:03:56.314608] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.087 [2024-11-26 21:03:56.373423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.087 [2024-11-26 21:03:56.411427] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.087 [2024-11-26 21:03:56.411460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.087 [2024-11-26 21:03:56.411466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.087 [2024-11-26 21:03:56.411472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.087 [2024-11-26 21:03:56.411476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
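The address harvesting traced above reduces to a small shell idiom: flatten "ip -o -4 addr show" to one record per address, keep the CIDR field with awk, strip the prefix length with cut, then slice the resulting newline-separated list with head/tail to get the first and second target IPs. A minimal standalone sketch of that idiom, assuming the two mlx_0_* interfaces carry the addresses shown in this run:

ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1              # -> 192.168.100.8
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # first line  -> 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # second line -> 192.168.100.9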
00:07:09.087 [2024-11-26 21:03:56.411959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.087 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:09.346 [2024-11-26 21:03:56.715829] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18b82e0/0x18bc7d0) succeed. 00:07:09.346 [2024-11-26 21:03:56.724370] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18b9790/0x18fde70) succeed. 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.346 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 ************************************ 00:07:09.605 START TEST lvs_grow_clean 00:07:09.605 ************************************ 00:07:09.605 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:09.605 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.605 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.605 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:09.606 21:03:56 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:09.864 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:09.864 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:09.864 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ef4fc02a-924a-4203-b4ea-211dd865b4be lvol 150 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ba4244ed-e6e5-49cc-a416-1760d895d289 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.123 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:10.383 [2024-11-26 21:03:57.665987] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:10.383 [2024-11-26 21:03:57.666035] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:10.383 true 00:07:10.383 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:10.383 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:10.642 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:10.642 21:03:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.642 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ba4244ed-e6e5-49cc-a416-1760d895d289 00:07:10.901 21:03:58 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:10.902 [2024-11-26 21:03:58.336131] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3754775 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3754775 /var/tmp/bdevperf.sock 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3754775 ']' 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:11.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.160 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:11.160 [2024-11-26 21:03:58.561901] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
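The bdevperf half of the test is driven entirely over its own RPC socket: the app starts idle with -z, the RDMA-attached controller is added by name, and perform_tests launches the queued 10-second randwrite job. A condensed sketch of that sequence, using the commands exactly as they appear in the run, with $SPDK standing in for the /var/jenkins/workspace/nvmf-phy-autotest/spdk checkout:

# start bdevperf in wait mode (-z) on its own RPC socket
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the NVMe-oF/RDMA namespace exported by the target as bdev Nvme0n1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# kick off the configured workload
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests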
00:07:11.160 [2024-11-26 21:03:58.561945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754775 ] 00:07:11.420 [2024-11-26 21:03:58.618842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.420 [2024-11-26 21:03:58.655694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.420 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.420 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:11.420 21:03:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:11.679 Nvme0n1 00:07:11.679 21:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:11.938 [ 00:07:11.938 { 00:07:11.939 "name": "Nvme0n1", 00:07:11.939 "aliases": [ 00:07:11.939 "ba4244ed-e6e5-49cc-a416-1760d895d289" 00:07:11.939 ], 00:07:11.939 "product_name": "NVMe disk", 00:07:11.939 "block_size": 4096, 00:07:11.939 "num_blocks": 38912, 00:07:11.939 "uuid": "ba4244ed-e6e5-49cc-a416-1760d895d289", 00:07:11.939 "numa_id": 0, 00:07:11.939 "assigned_rate_limits": { 00:07:11.939 "rw_ios_per_sec": 0, 00:07:11.939 "rw_mbytes_per_sec": 0, 00:07:11.939 "r_mbytes_per_sec": 0, 00:07:11.939 "w_mbytes_per_sec": 0 00:07:11.939 }, 00:07:11.939 "claimed": false, 00:07:11.939 "zoned": false, 00:07:11.939 "supported_io_types": { 00:07:11.939 "read": true, 00:07:11.939 "write": true, 00:07:11.939 "unmap": true, 00:07:11.939 "flush": true, 00:07:11.939 "reset": true, 00:07:11.939 "nvme_admin": true, 00:07:11.939 "nvme_io": true, 00:07:11.939 "nvme_io_md": false, 00:07:11.939 "write_zeroes": true, 00:07:11.939 "zcopy": false, 00:07:11.939 "get_zone_info": false, 00:07:11.939 "zone_management": false, 00:07:11.939 "zone_append": false, 00:07:11.939 "compare": true, 00:07:11.939 "compare_and_write": true, 00:07:11.939 "abort": true, 00:07:11.939 "seek_hole": false, 00:07:11.939 "seek_data": false, 00:07:11.939 "copy": true, 00:07:11.939 "nvme_iov_md": false 00:07:11.939 }, 00:07:11.939 "memory_domains": [ 00:07:11.939 { 00:07:11.939 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:11.939 "dma_device_type": 0 00:07:11.939 } 00:07:11.939 ], 00:07:11.939 "driver_specific": { 00:07:11.939 "nvme": [ 00:07:11.939 { 00:07:11.939 "trid": { 00:07:11.939 "trtype": "RDMA", 00:07:11.939 "adrfam": "IPv4", 00:07:11.939 "traddr": "192.168.100.8", 00:07:11.939 "trsvcid": "4420", 00:07:11.939 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:11.939 }, 00:07:11.939 "ctrlr_data": { 00:07:11.939 "cntlid": 1, 00:07:11.939 "vendor_id": "0x8086", 00:07:11.939 "model_number": "SPDK bdev Controller", 00:07:11.939 "serial_number": "SPDK0", 00:07:11.939 "firmware_revision": "25.01", 00:07:11.939 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:11.939 "oacs": { 00:07:11.939 "security": 0, 00:07:11.939 "format": 0, 00:07:11.939 "firmware": 0, 00:07:11.939 "ns_manage": 0 00:07:11.939 }, 00:07:11.939 "multi_ctrlr": true, 
00:07:11.939 "ana_reporting": false 00:07:11.939 }, 00:07:11.939 "vs": { 00:07:11.939 "nvme_version": "1.3" 00:07:11.939 }, 00:07:11.939 "ns_data": { 00:07:11.939 "id": 1, 00:07:11.939 "can_share": true 00:07:11.939 } 00:07:11.939 } 00:07:11.939 ], 00:07:11.939 "mp_policy": "active_passive" 00:07:11.939 } 00:07:11.939 } 00:07:11.939 ] 00:07:11.939 21:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3754874 00:07:11.939 21:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:11.939 21:03:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.939 Running I/O for 10 seconds... 00:07:12.873 Latency(us) 00:07:12.873 [2024-11-26T20:04:00.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:12.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.873 Nvme0n1 : 1.00 37408.00 146.12 0.00 0.00 0.00 0.00 0.00 00:07:12.873 [2024-11-26T20:04:00.309Z] =================================================================================================================== 00:07:12.873 [2024-11-26T20:04:00.309Z] Total : 37408.00 146.12 0.00 0.00 0.00 0.00 0.00 00:07:12.873 00:07:13.808 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:14.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.066 Nvme0n1 : 2.00 37824.00 147.75 0.00 0.00 0.00 0.00 0.00 00:07:14.066 [2024-11-26T20:04:01.502Z] =================================================================================================================== 00:07:14.066 [2024-11-26T20:04:01.502Z] Total : 37824.00 147.75 0.00 0.00 0.00 0.00 0.00 00:07:14.066 00:07:14.066 true 00:07:14.066 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:14.066 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:14.325 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:14.325 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:14.325 21:04:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3754874 00:07:14.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.891 Nvme0n1 : 3.00 37984.67 148.38 0.00 0.00 0.00 0.00 0.00 00:07:14.891 [2024-11-26T20:04:02.327Z] =================================================================================================================== 00:07:14.891 [2024-11-26T20:04:02.327Z] Total : 37984.67 148.38 0.00 0.00 0.00 0.00 0.00 00:07:14.891 00:07:15.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.828 Nvme0n1 : 4.00 38143.75 149.00 0.00 0.00 0.00 0.00 0.00 00:07:15.828 [2024-11-26T20:04:03.264Z] 
=================================================================================================================== 00:07:15.828 [2024-11-26T20:04:03.264Z] Total : 38143.75 149.00 0.00 0.00 0.00 0.00 0.00 00:07:15.828 00:07:17.204 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.204 Nvme0n1 : 5.00 38239.40 149.37 0.00 0.00 0.00 0.00 0.00 00:07:17.204 [2024-11-26T20:04:04.640Z] =================================================================================================================== 00:07:17.204 [2024-11-26T20:04:04.640Z] Total : 38239.40 149.37 0.00 0.00 0.00 0.00 0.00 00:07:17.204 00:07:18.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.139 Nvme0n1 : 6.00 38303.33 149.62 0.00 0.00 0.00 0.00 0.00 00:07:18.139 [2024-11-26T20:04:05.575Z] =================================================================================================================== 00:07:18.139 [2024-11-26T20:04:05.575Z] Total : 38303.33 149.62 0.00 0.00 0.00 0.00 0.00 00:07:18.139 00:07:19.076 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.076 Nvme0n1 : 7.00 38363.71 149.86 0.00 0.00 0.00 0.00 0.00 00:07:19.076 [2024-11-26T20:04:06.512Z] =================================================================================================================== 00:07:19.076 [2024-11-26T20:04:06.512Z] Total : 38363.71 149.86 0.00 0.00 0.00 0.00 0.00 00:07:19.076 00:07:20.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.014 Nvme0n1 : 8.00 38400.38 150.00 0.00 0.00 0.00 0.00 0.00 00:07:20.014 [2024-11-26T20:04:07.450Z] =================================================================================================================== 00:07:20.014 [2024-11-26T20:04:07.450Z] Total : 38400.38 150.00 0.00 0.00 0.00 0.00 0.00 00:07:20.014 00:07:20.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.961 Nvme0n1 : 9.00 38420.89 150.08 0.00 0.00 0.00 0.00 0.00 00:07:20.961 [2024-11-26T20:04:08.397Z] =================================================================================================================== 00:07:20.961 [2024-11-26T20:04:08.397Z] Total : 38420.89 150.08 0.00 0.00 0.00 0.00 0.00 00:07:20.961 00:07:21.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.905 Nvme0n1 : 10.00 38447.50 150.19 0.00 0.00 0.00 0.00 0.00 00:07:21.905 [2024-11-26T20:04:09.341Z] =================================================================================================================== 00:07:21.905 [2024-11-26T20:04:09.341Z] Total : 38447.50 150.19 0.00 0.00 0.00 0.00 0.00 00:07:21.905 00:07:21.905 00:07:21.905 Latency(us) 00:07:21.905 [2024-11-26T20:04:09.341Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.905 Nvme0n1 : 10.00 38447.56 150.19 0.00 0.00 3326.59 2220.94 11747.93 00:07:21.905 [2024-11-26T20:04:09.341Z] =================================================================================================================== 00:07:21.905 [2024-11-26T20:04:09.341Z] Total : 38447.56 150.19 0.00 0.00 3326.59 2220.94 11747.93 00:07:21.905 { 00:07:21.905 "results": [ 00:07:21.905 { 00:07:21.905 "job": "Nvme0n1", 00:07:21.905 "core_mask": "0x2", 00:07:21.905 "workload": "randwrite", 00:07:21.905 "status": "finished", 00:07:21.905 "queue_depth": 128, 00:07:21.905 "io_size": 4096, 
00:07:21.905 "runtime": 10.004302, 00:07:21.905 "iops": 38447.55985974833, 00:07:21.905 "mibps": 150.18578070214193, 00:07:21.905 "io_failed": 0, 00:07:21.905 "io_timeout": 0, 00:07:21.905 "avg_latency_us": 3326.593266737324, 00:07:21.905 "min_latency_us": 2220.9422222222224, 00:07:21.905 "max_latency_us": 11747.934814814815 00:07:21.905 } 00:07:21.905 ], 00:07:21.905 "core_count": 1 00:07:21.905 } 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3754775 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3754775 ']' 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3754775 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.905 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3754775 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3754775' 00:07:22.164 killing process with pid 3754775 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3754775 00:07:22.164 Received shutdown signal, test time was about 10.000000 seconds 00:07:22.164 00:07:22.164 Latency(us) 00:07:22.164 [2024-11-26T20:04:09.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.164 [2024-11-26T20:04:09.600Z] =================================================================================================================== 00:07:22.164 [2024-11-26T20:04:09.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3754775 00:07:22.164 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:22.423 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:22.681 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:22.681 21:04:09 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:22.681 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:22.681 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:22.681 21:04:10 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.940 [2024-11-26 21:04:10.206895] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.941 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:23.201 request: 00:07:23.201 { 00:07:23.201 "uuid": "ef4fc02a-924a-4203-b4ea-211dd865b4be", 00:07:23.201 "method": "bdev_lvol_get_lvstores", 00:07:23.201 "req_id": 1 00:07:23.201 } 00:07:23.201 Got JSON-RPC error response 00:07:23.201 response: 00:07:23.201 { 00:07:23.201 "code": -19, 00:07:23.201 "message": "No such device" 00:07:23.201 } 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.201 aio_bdev 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ba4244ed-e6e5-49cc-a416-1760d895d289 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ba4244ed-e6e5-49cc-a416-1760d895d289 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:23.201 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.461 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ba4244ed-e6e5-49cc-a416-1760d895d289 -t 2000 00:07:23.721 [ 00:07:23.721 { 00:07:23.721 "name": "ba4244ed-e6e5-49cc-a416-1760d895d289", 00:07:23.721 "aliases": [ 00:07:23.721 "lvs/lvol" 00:07:23.721 ], 00:07:23.721 "product_name": "Logical Volume", 00:07:23.721 "block_size": 4096, 00:07:23.721 "num_blocks": 38912, 00:07:23.721 "uuid": "ba4244ed-e6e5-49cc-a416-1760d895d289", 00:07:23.721 "assigned_rate_limits": { 00:07:23.721 "rw_ios_per_sec": 0, 00:07:23.721 "rw_mbytes_per_sec": 0, 00:07:23.721 "r_mbytes_per_sec": 0, 00:07:23.721 "w_mbytes_per_sec": 0 00:07:23.721 }, 00:07:23.721 "claimed": false, 00:07:23.721 "zoned": false, 00:07:23.721 "supported_io_types": { 00:07:23.721 "read": true, 00:07:23.721 "write": true, 00:07:23.721 "unmap": true, 00:07:23.721 "flush": false, 00:07:23.721 "reset": true, 00:07:23.721 "nvme_admin": false, 00:07:23.721 "nvme_io": false, 00:07:23.721 "nvme_io_md": false, 00:07:23.721 "write_zeroes": true, 00:07:23.721 "zcopy": false, 00:07:23.721 "get_zone_info": false, 00:07:23.721 "zone_management": false, 00:07:23.721 "zone_append": false, 00:07:23.721 "compare": false, 00:07:23.721 "compare_and_write": false, 00:07:23.721 "abort": false, 00:07:23.721 "seek_hole": true, 00:07:23.721 "seek_data": true, 00:07:23.721 "copy": false, 00:07:23.721 "nvme_iov_md": false 00:07:23.721 }, 00:07:23.721 "driver_specific": { 00:07:23.721 "lvol": { 00:07:23.721 "lvol_store_uuid": "ef4fc02a-924a-4203-b4ea-211dd865b4be", 00:07:23.721 "base_bdev": "aio_bdev", 00:07:23.721 "thin_provision": false, 00:07:23.721 "num_allocated_clusters": 38, 00:07:23.721 "snapshot": false, 00:07:23.721 "clone": false, 00:07:23.721 "esnap_clone": false 00:07:23.721 } 00:07:23.721 } 00:07:23.721 } 00:07:23.721 ] 00:07:23.721 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:23.721 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.721 21:04:10 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:23.721 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.721 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:23.721 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.980 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:23.980 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ba4244ed-e6e5-49cc-a416-1760d895d289 00:07:24.239 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ef4fc02a-924a-4203-b4ea-211dd865b4be 00:07:24.239 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.498 00:07:24.498 real 0m15.041s 00:07:24.498 user 0m15.036s 00:07:24.498 sys 0m0.912s 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:24.498 ************************************ 00:07:24.498 END TEST lvs_grow_clean 00:07:24.498 ************************************ 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:24.498 ************************************ 00:07:24.498 START TEST lvs_grow_dirty 00:07:24.498 ************************************ 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:24.498 21:04:11 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:24.757 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:24.757 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:25.016 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:25.016 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:25.016 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5efb99d-49aa-4616-9862-4f674fa4cdfe lvol 150 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.274 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:25.534 [2024-11-26 21:04:12.794931] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:25.534 [2024-11-26 21:04:12.794975] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:25.534 true 00:07:25.534 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:25.534 21:04:12 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:25.793 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:25.793 21:04:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:25.793 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:26.052 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:26.052 [2024-11-26 21:04:13.477086] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3757615 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3757615 /var/tmp/bdevperf.sock 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3757615 ']' 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:26.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.312 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:26.312 [2024-11-26 21:04:13.700220] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
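The dirty variant reuses the same grow mechanics traced above: the backing file is doubled with truncate, bdev_aio_rescan makes the AIO bdev re-read its size (51200 -> 102400 4 KiB blocks in the resize notice), and only the later bdev_lvol_grow_lvstore call lets the lvstore claim the new space; until then total_data_clusters stays at 49. A minimal sketch of that order of operations, assuming the same paths and the lvstore UUID from this run:

truncate -s 400M $SPDK/test/nvmf/target/aio_bdev          # grow backing file 200M -> 400M
$SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev             # bdev picks up the new 102400-block size
$SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u b5efb99d-49aa-4616-9862-4f674fa4cdfe
$SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe \
    | jq -r '.[0].total_data_clusters'                    # 49 before the grow, 99 after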
00:07:26.312 [2024-11-26 21:04:13.700269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757615 ] 00:07:26.571 [2024-11-26 21:04:13.760206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.572 [2024-11-26 21:04:13.799198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.572 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.572 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:26.572 21:04:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:26.830 Nvme0n1 00:07:26.830 21:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:27.090 [ 00:07:27.090 { 00:07:27.090 "name": "Nvme0n1", 00:07:27.090 "aliases": [ 00:07:27.090 "4ab2e424-c32f-4bf4-aa79-f59075d1f918" 00:07:27.090 ], 00:07:27.090 "product_name": "NVMe disk", 00:07:27.090 "block_size": 4096, 00:07:27.090 "num_blocks": 38912, 00:07:27.090 "uuid": "4ab2e424-c32f-4bf4-aa79-f59075d1f918", 00:07:27.090 "numa_id": 0, 00:07:27.090 "assigned_rate_limits": { 00:07:27.090 "rw_ios_per_sec": 0, 00:07:27.090 "rw_mbytes_per_sec": 0, 00:07:27.090 "r_mbytes_per_sec": 0, 00:07:27.090 "w_mbytes_per_sec": 0 00:07:27.090 }, 00:07:27.090 "claimed": false, 00:07:27.090 "zoned": false, 00:07:27.090 "supported_io_types": { 00:07:27.090 "read": true, 00:07:27.090 "write": true, 00:07:27.090 "unmap": true, 00:07:27.090 "flush": true, 00:07:27.090 "reset": true, 00:07:27.090 "nvme_admin": true, 00:07:27.090 "nvme_io": true, 00:07:27.091 "nvme_io_md": false, 00:07:27.091 "write_zeroes": true, 00:07:27.091 "zcopy": false, 00:07:27.091 "get_zone_info": false, 00:07:27.091 "zone_management": false, 00:07:27.091 "zone_append": false, 00:07:27.091 "compare": true, 00:07:27.091 "compare_and_write": true, 00:07:27.091 "abort": true, 00:07:27.091 "seek_hole": false, 00:07:27.091 "seek_data": false, 00:07:27.091 "copy": true, 00:07:27.091 "nvme_iov_md": false 00:07:27.091 }, 00:07:27.091 "memory_domains": [ 00:07:27.091 { 00:07:27.091 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:07:27.091 "dma_device_type": 0 00:07:27.091 } 00:07:27.091 ], 00:07:27.091 "driver_specific": { 00:07:27.091 "nvme": [ 00:07:27.091 { 00:07:27.091 "trid": { 00:07:27.091 "trtype": "RDMA", 00:07:27.091 "adrfam": "IPv4", 00:07:27.091 "traddr": "192.168.100.8", 00:07:27.091 "trsvcid": "4420", 00:07:27.091 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:27.091 }, 00:07:27.091 "ctrlr_data": { 00:07:27.091 "cntlid": 1, 00:07:27.091 "vendor_id": "0x8086", 00:07:27.091 "model_number": "SPDK bdev Controller", 00:07:27.091 "serial_number": "SPDK0", 00:07:27.091 "firmware_revision": "25.01", 00:07:27.091 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:27.091 "oacs": { 00:07:27.091 "security": 0, 00:07:27.091 "format": 0, 00:07:27.091 "firmware": 0, 00:07:27.091 "ns_manage": 0 00:07:27.091 }, 00:07:27.091 "multi_ctrlr": true, 
00:07:27.091 "ana_reporting": false 00:07:27.091 }, 00:07:27.091 "vs": { 00:07:27.091 "nvme_version": "1.3" 00:07:27.091 }, 00:07:27.091 "ns_data": { 00:07:27.091 "id": 1, 00:07:27.091 "can_share": true 00:07:27.091 } 00:07:27.091 } 00:07:27.091 ], 00:07:27.091 "mp_policy": "active_passive" 00:07:27.091 } 00:07:27.091 } 00:07:27.091 ] 00:07:27.091 21:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3757740 00:07:27.091 21:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:27.091 21:04:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:27.091 Running I/O for 10 seconds... 00:07:28.030 Latency(us) 00:07:28.030 [2024-11-26T20:04:15.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.030 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.030 Nvme0n1 : 1.00 37376.00 146.00 0.00 0.00 0.00 0.00 0.00 00:07:28.030 [2024-11-26T20:04:15.466Z] =================================================================================================================== 00:07:28.030 [2024-11-26T20:04:15.466Z] Total : 37376.00 146.00 0.00 0.00 0.00 0.00 0.00 00:07:28.030 00:07:28.967 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:28.967 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.967 Nvme0n1 : 2.00 37808.50 147.69 0.00 0.00 0.00 0.00 0.00 00:07:28.967 [2024-11-26T20:04:16.403Z] =================================================================================================================== 00:07:28.967 [2024-11-26T20:04:16.403Z] Total : 37808.50 147.69 0.00 0.00 0.00 0.00 0.00 00:07:28.967 00:07:29.227 true 00:07:29.227 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:29.227 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:29.485 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:29.485 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:29.485 21:04:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3757740 00:07:30.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.054 Nvme0n1 : 3.00 37961.33 148.29 0.00 0.00 0.00 0.00 0.00 00:07:30.054 [2024-11-26T20:04:17.490Z] =================================================================================================================== 00:07:30.054 [2024-11-26T20:04:17.490Z] Total : 37961.33 148.29 0.00 0.00 0.00 0.00 0.00 00:07:30.054 00:07:30.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.992 Nvme0n1 : 4.00 38095.50 148.81 0.00 0.00 0.00 0.00 0.00 00:07:30.992 [2024-11-26T20:04:18.428Z] 
=================================================================================================================== 00:07:30.992 [2024-11-26T20:04:18.428Z] Total : 38095.50 148.81 0.00 0.00 0.00 0.00 0.00 00:07:30.992 00:07:32.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.374 Nvme0n1 : 5.00 38196.60 149.21 0.00 0.00 0.00 0.00 0.00 00:07:32.374 [2024-11-26T20:04:19.810Z] =================================================================================================================== 00:07:32.374 [2024-11-26T20:04:19.810Z] Total : 38196.60 149.21 0.00 0.00 0.00 0.00 0.00 00:07:32.374 00:07:33.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.310 Nvme0n1 : 6.00 38186.00 149.16 0.00 0.00 0.00 0.00 0.00 00:07:33.310 [2024-11-26T20:04:20.746Z] =================================================================================================================== 00:07:33.310 [2024-11-26T20:04:20.746Z] Total : 38186.00 149.16 0.00 0.00 0.00 0.00 0.00 00:07:33.310 00:07:34.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.243 Nvme0n1 : 7.00 38198.29 149.21 0.00 0.00 0.00 0.00 0.00 00:07:34.243 [2024-11-26T20:04:21.679Z] =================================================================================================================== 00:07:34.243 [2024-11-26T20:04:21.679Z] Total : 38198.29 149.21 0.00 0.00 0.00 0.00 0.00 00:07:34.243 00:07:35.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.179 Nvme0n1 : 8.00 38245.12 149.40 0.00 0.00 0.00 0.00 0.00 00:07:35.179 [2024-11-26T20:04:22.615Z] =================================================================================================================== 00:07:35.179 [2024-11-26T20:04:22.615Z] Total : 38245.12 149.40 0.00 0.00 0.00 0.00 0.00 00:07:35.179 00:07:36.116 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.116 Nvme0n1 : 9.00 38292.78 149.58 0.00 0.00 0.00 0.00 0.00 00:07:36.116 [2024-11-26T20:04:23.552Z] =================================================================================================================== 00:07:36.116 [2024-11-26T20:04:23.552Z] Total : 38292.78 149.58 0.00 0.00 0.00 0.00 0.00 00:07:36.116 00:07:37.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.053 Nvme0n1 : 10.00 38323.90 149.70 0.00 0.00 0.00 0.00 0.00 00:07:37.053 [2024-11-26T20:04:24.489Z] =================================================================================================================== 00:07:37.053 [2024-11-26T20:04:24.489Z] Total : 38323.90 149.70 0.00 0.00 0.00 0.00 0.00 00:07:37.053 00:07:37.053 00:07:37.053 Latency(us) 00:07:37.053 [2024-11-26T20:04:24.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.053 Nvme0n1 : 10.00 38322.74 149.70 0.00 0.00 3337.39 2475.80 15922.82 00:07:37.053 [2024-11-26T20:04:24.489Z] =================================================================================================================== 00:07:37.053 [2024-11-26T20:04:24.489Z] Total : 38322.74 149.70 0.00 0.00 3337.39 2475.80 15922.82 00:07:37.053 { 00:07:37.053 "results": [ 00:07:37.053 { 00:07:37.053 "job": "Nvme0n1", 00:07:37.053 "core_mask": "0x2", 00:07:37.053 "workload": "randwrite", 00:07:37.053 "status": "finished", 00:07:37.053 "queue_depth": 128, 00:07:37.053 "io_size": 4096, 
00:07:37.053 "runtime": 10.002807, 00:07:37.053 "iops": 38322.74280609433, 00:07:37.053 "mibps": 149.69821408630597, 00:07:37.053 "io_failed": 0, 00:07:37.053 "io_timeout": 0, 00:07:37.053 "avg_latency_us": 3337.392944946616, 00:07:37.053 "min_latency_us": 2475.8044444444445, 00:07:37.053 "max_latency_us": 15922.82074074074 00:07:37.053 } 00:07:37.053 ], 00:07:37.053 "core_count": 1 00:07:37.053 } 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3757615 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3757615 ']' 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3757615 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.053 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3757615 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3757615' 00:07:37.313 killing process with pid 3757615 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3757615 00:07:37.313 Received shutdown signal, test time was about 10.000000 seconds 00:07:37.313 00:07:37.313 Latency(us) 00:07:37.313 [2024-11-26T20:04:24.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.313 [2024-11-26T20:04:24.749Z] =================================================================================================================== 00:07:37.313 [2024-11-26T20:04:24.749Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3757615 00:07:37.313 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:37.572 21:04:24 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:37.831 21:04:25 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3754291 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3754291 00:07:37.831 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3754291 Killed "${NVMF_APP[@]}" "$@" 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3759788 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3759788 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3759788 ']' 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.831 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:38.090 [2024-11-26 21:04:25.287608] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:38.090 [2024-11-26 21:04:25.287651] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:38.090 [2024-11-26 21:04:25.346335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.090 [2024-11-26 21:04:25.383995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:38.090 [2024-11-26 21:04:25.384028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:38.090 [2024-11-26 21:04:25.384034] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:38.090 [2024-11-26 21:04:25.384039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:38.090 [2024-11-26 21:04:25.384043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:38.090 [2024-11-26 21:04:25.384512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:38.090 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.348 [2024-11-26 21:04:25.667790] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:38.348 [2024-11-26 21:04:25.667884] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:38.348 [2024-11-26 21:04:25.667907] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.348 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.607 21:04:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ab2e424-c32f-4bf4-aa79-f59075d1f918 -t 2000 00:07:38.607 [ 00:07:38.607 { 00:07:38.607 "name": "4ab2e424-c32f-4bf4-aa79-f59075d1f918", 00:07:38.607 "aliases": [ 00:07:38.607 "lvs/lvol" 00:07:38.607 ], 00:07:38.607 "product_name": "Logical Volume", 00:07:38.607 "block_size": 4096, 00:07:38.607 "num_blocks": 38912, 00:07:38.607 "uuid": "4ab2e424-c32f-4bf4-aa79-f59075d1f918", 00:07:38.607 "assigned_rate_limits": { 00:07:38.607 "rw_ios_per_sec": 0, 00:07:38.607 "rw_mbytes_per_sec": 0, 
00:07:38.607 "r_mbytes_per_sec": 0, 00:07:38.607 "w_mbytes_per_sec": 0 00:07:38.607 }, 00:07:38.607 "claimed": false, 00:07:38.607 "zoned": false, 00:07:38.607 "supported_io_types": { 00:07:38.607 "read": true, 00:07:38.607 "write": true, 00:07:38.607 "unmap": true, 00:07:38.607 "flush": false, 00:07:38.607 "reset": true, 00:07:38.607 "nvme_admin": false, 00:07:38.607 "nvme_io": false, 00:07:38.607 "nvme_io_md": false, 00:07:38.607 "write_zeroes": true, 00:07:38.607 "zcopy": false, 00:07:38.607 "get_zone_info": false, 00:07:38.607 "zone_management": false, 00:07:38.607 "zone_append": false, 00:07:38.607 "compare": false, 00:07:38.607 "compare_and_write": false, 00:07:38.607 "abort": false, 00:07:38.607 "seek_hole": true, 00:07:38.607 "seek_data": true, 00:07:38.607 "copy": false, 00:07:38.607 "nvme_iov_md": false 00:07:38.607 }, 00:07:38.607 "driver_specific": { 00:07:38.607 "lvol": { 00:07:38.607 "lvol_store_uuid": "b5efb99d-49aa-4616-9862-4f674fa4cdfe", 00:07:38.607 "base_bdev": "aio_bdev", 00:07:38.607 "thin_provision": false, 00:07:38.607 "num_allocated_clusters": 38, 00:07:38.607 "snapshot": false, 00:07:38.607 "clone": false, 00:07:38.607 "esnap_clone": false 00:07:38.607 } 00:07:38.607 } 00:07:38.607 } 00:07:38.607 ] 00:07:38.607 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:38.607 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:38.607 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:38.867 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:38.867 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:38.867 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:39.126 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:39.126 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:39.126 [2024-11-26 21:04:26.516451] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:39.384 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:39.385 request: 00:07:39.385 { 00:07:39.385 "uuid": "b5efb99d-49aa-4616-9862-4f674fa4cdfe", 00:07:39.385 "method": "bdev_lvol_get_lvstores", 00:07:39.385 "req_id": 1 00:07:39.385 } 00:07:39.385 Got JSON-RPC error response 00:07:39.385 response: 00:07:39.385 { 00:07:39.385 "code": -19, 00:07:39.385 "message": "No such device" 00:07:39.385 } 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.385 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:39.644 aio_bdev 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.644 21:04:26 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.644 21:04:26 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:39.902 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ab2e424-c32f-4bf4-aa79-f59075d1f918 -t 2000 00:07:39.902 [ 00:07:39.902 { 00:07:39.902 "name": "4ab2e424-c32f-4bf4-aa79-f59075d1f918", 00:07:39.902 "aliases": [ 00:07:39.902 "lvs/lvol" 00:07:39.902 ], 00:07:39.902 "product_name": "Logical Volume", 00:07:39.902 "block_size": 4096, 00:07:39.902 "num_blocks": 38912, 00:07:39.902 "uuid": "4ab2e424-c32f-4bf4-aa79-f59075d1f918", 00:07:39.902 "assigned_rate_limits": { 00:07:39.902 "rw_ios_per_sec": 0, 00:07:39.902 "rw_mbytes_per_sec": 0, 00:07:39.902 "r_mbytes_per_sec": 0, 00:07:39.902 "w_mbytes_per_sec": 0 00:07:39.902 }, 00:07:39.903 "claimed": false, 00:07:39.903 "zoned": false, 00:07:39.903 "supported_io_types": { 00:07:39.903 "read": true, 00:07:39.903 "write": true, 00:07:39.903 "unmap": true, 00:07:39.903 "flush": false, 00:07:39.903 "reset": true, 00:07:39.903 "nvme_admin": false, 00:07:39.903 "nvme_io": false, 00:07:39.903 "nvme_io_md": false, 00:07:39.903 "write_zeroes": true, 00:07:39.903 "zcopy": false, 00:07:39.903 "get_zone_info": false, 00:07:39.903 "zone_management": false, 00:07:39.903 "zone_append": false, 00:07:39.903 "compare": false, 00:07:39.903 "compare_and_write": false, 00:07:39.903 "abort": false, 00:07:39.903 "seek_hole": true, 00:07:39.903 "seek_data": true, 00:07:39.903 "copy": false, 00:07:39.903 "nvme_iov_md": false 00:07:39.903 }, 00:07:39.903 "driver_specific": { 00:07:39.903 "lvol": { 00:07:39.903 "lvol_store_uuid": "b5efb99d-49aa-4616-9862-4f674fa4cdfe", 00:07:39.903 "base_bdev": "aio_bdev", 00:07:39.903 "thin_provision": false, 00:07:39.903 "num_allocated_clusters": 38, 00:07:39.903 "snapshot": false, 00:07:39.903 "clone": false, 00:07:39.903 "esnap_clone": false 00:07:39.903 } 00:07:39.903 } 00:07:39.903 } 00:07:39.903 ] 00:07:39.903 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.903 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:39.903 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:40.162 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:40.162 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:40.162 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:40.422 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:40.422 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ab2e424-c32f-4bf4-aa79-f59075d1f918 00:07:40.422 21:04:27 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5efb99d-49aa-4616-9862-4f674fa4cdfe 00:07:40.682 21:04:27 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.943 00:07:40.943 real 0m16.277s 00:07:40.943 user 0m43.098s 00:07:40.943 sys 0m2.648s 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.943 ************************************ 00:07:40.943 END TEST lvs_grow_dirty 00:07:40.943 ************************************ 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.943 nvmf_trace.0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:40.943 rmmod nvme_rdma 00:07:40.943 rmmod nvme_fabrics 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.943 
21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3759788 ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3759788 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3759788 ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3759788 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3759788 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3759788' 00:07:40.943 killing process with pid 3759788 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3759788 00:07:40.943 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3759788 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:41.202 00:07:41.202 real 0m37.775s 00:07:41.202 user 1m3.093s 00:07:41.202 sys 0m8.055s 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 ************************************ 00:07:41.202 END TEST nvmf_lvs_grow 00:07:41.202 ************************************ 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:41.202 ************************************ 00:07:41.202 START TEST nvmf_bdev_io_wait 00:07:41.202 ************************************ 00:07:41.202 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:41.462 * Looking for test storage... 
00:07:41.462 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.462 --rc genhtml_branch_coverage=1 00:07:41.462 --rc genhtml_function_coverage=1 00:07:41.462 --rc genhtml_legend=1 00:07:41.462 --rc geninfo_all_blocks=1 00:07:41.462 --rc geninfo_unexecuted_blocks=1 00:07:41.462 00:07:41.462 ' 00:07:41.462 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.463 --rc genhtml_branch_coverage=1 00:07:41.463 --rc genhtml_function_coverage=1 00:07:41.463 --rc genhtml_legend=1 00:07:41.463 --rc geninfo_all_blocks=1 00:07:41.463 --rc geninfo_unexecuted_blocks=1 00:07:41.463 00:07:41.463 ' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.463 --rc genhtml_branch_coverage=1 00:07:41.463 --rc genhtml_function_coverage=1 00:07:41.463 --rc genhtml_legend=1 00:07:41.463 --rc geninfo_all_blocks=1 00:07:41.463 --rc geninfo_unexecuted_blocks=1 00:07:41.463 00:07:41.463 ' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.463 --rc genhtml_branch_coverage=1 00:07:41.463 --rc genhtml_function_coverage=1 00:07:41.463 --rc genhtml_legend=1 00:07:41.463 --rc geninfo_all_blocks=1 00:07:41.463 --rc geninfo_unexecuted_blocks=1 00:07:41.463 00:07:41.463 ' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:41.463 21:04:28 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:41.463 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:41.463 21:04:28 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.032 21:04:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.032 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:48.032 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:48.033 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:07:48.033 21:04:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:48.033 Found net devices under 0000:18:00.0: mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:48.033 Found net devices under 0000:18:00.1: mlx_0_1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:48.033 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:48.033 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:48.033 altname enp24s0f0np0 00:07:48.033 altname ens785f0np0 00:07:48.033 inet 192.168.100.8/24 scope global mlx_0_0 00:07:48.033 valid_lft forever preferred_lft forever 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:48.033 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:48.033 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:48.033 altname enp24s0f1np1 00:07:48.033 altname ens785f1np1 00:07:48.033 inet 192.168.100.9/24 scope global mlx_0_1 00:07:48.033 valid_lft forever preferred_lft forever 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:48.033 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:48.034 192.168.100.9' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:48.034 192.168.100.9' 00:07:48.034 
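The interface-to-address resolution traced above reduces to one small helper. A minimal sketch, reconstructed from the xtrace output; the real helpers live in test/nvmf/common.sh:

    get_ip_address() {
        local interface=$1
        # column 4 of `ip -o -4 addr show` is ADDR/PREFIX; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # mlx_0_0 / mlx_0_1 are the RDMA-capable netdevs this run discovered
    RDMA_IP_LIST=$(for nic_name in $(get_rdma_if_list); do get_ip_address "$nic_name"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9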
21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:48.034 192.168.100.9' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3763696 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3763696 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3763696 ']' 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 [2024-11-26 21:04:34.522742] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
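nvmfappstart, traced above at nvmf/common.sh@508-@510, amounts to launching the target in the background and polling its RPC socket until it answers; a rough sketch of the commands in the trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!                # 3763696 in this run
    waitforlisten "$nvmfpid"  # retries until /var/tmp/spdk.sock accepts RPCs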
00:07:48.034 [2024-11-26 21:04:34.522787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.034 [2024-11-26 21:04:34.580890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.034 [2024-11-26 21:04:34.618949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.034 [2024-11-26 21:04:34.618984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.034 [2024-11-26 21:04:34.618990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.034 [2024-11-26 21:04:34.618995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.034 [2024-11-26 21:04:34.619000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.034 [2024-11-26 21:04:34.620213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.034 [2024-11-26 21:04:34.620234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.034 [2024-11-26 21:04:34.620322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.034 [2024-11-26 21:04:34.620324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 [2024-11-26 21:04:34.790014] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16b25a0/0x16b6a90) succeed. 00:07:48.034 [2024-11-26 21:04:34.799036] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16b3c30/0x16f8130) succeed. 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 Malloc0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:48.034 [2024-11-26 21:04:34.966204] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3763723 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3763727 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 
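Condensed, the target-side provisioning just traced is the following RPC sequence (rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock; a sketch with the long paths shortened to rpc.py):

    rpc.py bdev_set_options -p 5 -c 1
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420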
00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.034 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.034 { 00:07:48.034 "params": { 00:07:48.034 "name": "Nvme$subsystem", 00:07:48.035 "trtype": "$TEST_TRANSPORT", 00:07:48.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "$NVMF_PORT", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.035 "hdgst": ${hdgst:-false}, 00:07:48.035 "ddgst": ${ddgst:-false} 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 } 00:07:48.035 EOF 00:07:48.035 )") 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3763729 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.035 { 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme$subsystem", 00:07:48.035 "trtype": "$TEST_TRANSPORT", 00:07:48.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "$NVMF_PORT", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.035 "hdgst": ${hdgst:-false}, 00:07:48.035 "ddgst": ${ddgst:-false} 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 } 00:07:48.035 EOF 00:07:48.035 )") 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3763732 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 
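Each workload gets its own bdevperf instance on a dedicated core, with a distinct shm id (-i) and DPDK file prefix so the four processes do not collide, and each reads the generated target JSON through process substitution, which is the /dev/fd/63 seen in the trace. A sketch of the write case; the read, flush and unmap launches differ only in -m, -i and -w:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!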
00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.035 { 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme$subsystem", 00:07:48.035 "trtype": "$TEST_TRANSPORT", 00:07:48.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "$NVMF_PORT", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.035 "hdgst": ${hdgst:-false}, 00:07:48.035 "ddgst": ${ddgst:-false} 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 } 00:07:48.035 EOF 00:07:48.035 )") 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:48.035 { 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme$subsystem", 00:07:48.035 "trtype": "$TEST_TRANSPORT", 00:07:48.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "$NVMF_PORT", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:48.035 "hdgst": ${hdgst:-false}, 00:07:48.035 "ddgst": ${ddgst:-false} 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 } 00:07:48.035 EOF 00:07:48.035 )") 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3763723 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme1", 00:07:48.035 "trtype": "rdma", 00:07:48.035 "traddr": "192.168.100.8", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "4420", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.035 "hdgst": false, 00:07:48.035 "ddgst": false 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 }' 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
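After the jq validation above, printf renders the per-instance config. Wrapped in the bdev-subsystem envelope that gen_nvmf_target_json emits, each bdevperf consumes roughly the following document; the params object is verbatim from the trace, while the outer wrapper shape is inferred from nvmf/common.sh:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }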
00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme1", 00:07:48.035 "trtype": "rdma", 00:07:48.035 "traddr": "192.168.100.8", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "4420", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.035 "hdgst": false, 00:07:48.035 "ddgst": false 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 }' 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme1", 00:07:48.035 "trtype": "rdma", 00:07:48.035 "traddr": "192.168.100.8", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "4420", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.035 "hdgst": false, 00:07:48.035 "ddgst": false 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 }' 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:48.035 21:04:34 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:48.035 "params": { 00:07:48.035 "name": "Nvme1", 00:07:48.035 "trtype": "rdma", 00:07:48.035 "traddr": "192.168.100.8", 00:07:48.035 "adrfam": "ipv4", 00:07:48.035 "trsvcid": "4420", 00:07:48.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:48.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:48.035 "hdgst": false, 00:07:48.035 "ddgst": false 00:07:48.035 }, 00:07:48.035 "method": "bdev_nvme_attach_controller" 00:07:48.035 }' 00:07:48.035 [2024-11-26 21:04:35.015385] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:48.035 [2024-11-26 21:04:35.015435] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:48.035 [2024-11-26 21:04:35.015631] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:48.035 [2024-11-26 21:04:35.015668] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:48.035 [2024-11-26 21:04:35.015918] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:48.035 [2024-11-26 21:04:35.015953] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:48.035 [2024-11-26 21:04:35.016906] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:07:48.035 [2024-11-26 21:04:35.016951] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:48.035 [2024-11-26 21:04:35.187279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.035 [2024-11-26 21:04:35.228435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:07:48.035 [2024-11-26 21:04:35.289412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.035 [2024-11-26 21:04:35.329996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:07:48.035 [2024-11-26 21:04:35.381631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.035 [2024-11-26 21:04:35.430078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:07:48.035 [2024-11-26 21:04:35.437631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.294 [2024-11-26 21:04:35.477478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:07:48.294 Running I/O for 1 seconds...
00:07:48.294 Running I/O for 1 seconds...
00:07:48.294 Running I/O for 1 seconds...
00:07:48.294 Running I/O for 1 seconds...
00:07:49.231 269264.00 IOPS, 1051.81 MiB/s
00:07:49.231 Latency(us)
00:07:49.231 [2024-11-26T20:04:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.231 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:07:49.231 Nvme1n1 : 1.00 268889.91 1050.35 0.00 0.00 473.91 201.77 1893.26
00:07:49.231 [2024-11-26T20:04:36.667Z] ===================================================================================================================
00:07:49.231 [2024-11-26T20:04:36.667Z] Total : 268889.91 1050.35 0.00 0.00 473.91 201.77 1893.26
00:07:49.231 18930.00 IOPS, 73.95 MiB/s
00:07:49.231 Latency(us)
00:07:49.231 [2024-11-26T20:04:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.231 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:07:49.231 Nvme1n1 : 1.01 18949.07 74.02 0.00 0.00 6729.98 3956.43 11796.48
00:07:49.231 [2024-11-26T20:04:36.667Z] ===================================================================================================================
00:07:49.231 [2024-11-26T20:04:36.667Z] Total : 18949.07 74.02 0.00 0.00 6729.98 3956.43 11796.48
00:07:49.231 18075.00 IOPS, 70.61 MiB/s
00:07:49.231 Latency(us)
00:07:49.231 [2024-11-26T20:04:36.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.231 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:07:49.231 Nvme1n1 : 1.01 18124.25 70.80 0.00 0.00 7042.95 3980.71 16602.45
00:07:49.231 [2024-11-26T20:04:36.667Z] ===================================================================================================================
00:07:49.231 [2024-11-26T20:04:36.667Z] Total : 18124.25 70.80 0.00 0.00 7042.95 3980.71 16602.45
00:07:49.231 15557.00 IOPS, 60.77 MiB/s
00:07:49.232 Latency(us)
00:07:49.232 [2024-11-26T20:04:36.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:49.232 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:07:49.232 Nvme1n1 : 1.01 15649.09 61.13 0.00 0.00 8160.34 3034.07 19515.16
00:07:49.232 [2024-11-26T20:04:36.668Z]
=================================================================================================================== 00:07:49.232 [2024-11-26T20:04:36.668Z] Total : 15649.09 61.13 0.00 0.00 8160.34 3034.07 19515.16 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3763727 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3763729 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3763732 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:49.491 rmmod nvme_rdma 00:07:49.491 rmmod nvme_fabrics 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3763696 ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3763696 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3763696 ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3763696 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3763696 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3763696' 00:07:49.491 killing process with pid 3763696 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3763696 00:07:49.491 21:04:36 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3763696 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:49.750 00:07:49.750 real 0m8.508s 00:07:49.750 user 0m16.590s 00:07:49.750 sys 0m5.546s 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.750 ************************************ 00:07:49.750 END TEST nvmf_bdev_io_wait 00:07:49.750 ************************************ 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.750 ************************************ 00:07:49.750 START TEST nvmf_queue_depth 00:07:49.750 ************************************ 00:07:49.750 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:50.011 * Looking for test storage... 
00:07:50.011 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.011 --rc genhtml_branch_coverage=1 00:07:50.011 --rc genhtml_function_coverage=1 00:07:50.011 --rc genhtml_legend=1 00:07:50.011 --rc geninfo_all_blocks=1 00:07:50.011 --rc geninfo_unexecuted_blocks=1 00:07:50.011 00:07:50.011 ' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.011 --rc genhtml_branch_coverage=1 00:07:50.011 --rc genhtml_function_coverage=1 00:07:50.011 --rc genhtml_legend=1 00:07:50.011 --rc geninfo_all_blocks=1 00:07:50.011 --rc geninfo_unexecuted_blocks=1 00:07:50.011 00:07:50.011 ' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.011 --rc genhtml_branch_coverage=1 00:07:50.011 --rc genhtml_function_coverage=1 00:07:50.011 --rc genhtml_legend=1 00:07:50.011 --rc geninfo_all_blocks=1 00:07:50.011 --rc geninfo_unexecuted_blocks=1 00:07:50.011 00:07:50.011 ' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.011 --rc genhtml_branch_coverage=1 00:07:50.011 --rc genhtml_function_coverage=1 00:07:50.011 --rc genhtml_legend=1 00:07:50.011 --rc geninfo_all_blocks=1 00:07:50.011 --rc geninfo_unexecuted_blocks=1 00:07:50.011 00:07:50.011 ' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.011 21:04:37 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:50.011 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.012 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.012 21:04:37 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:07:56.582 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:56.582 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:07:56.583 Found 0000:18:00.1 (0x15b3 - 0x1015) 
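The discovery pass above walks the supported PCI ID tables (e810, x722, mlx) and then maps every matched function to its netdev through sysfs; a condensed sketch of the loop being traced:

    for pci in "${pci_devs[@]}"; do                        # 0000:18:00.0, 0000:18:00.1 here
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdevs bound to this function
        (( ${#pci_net_devs[@]} == 0 )) && continue         # skip functions with no netdev
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: mlx_0_0, mlx_0_1
        net_devs+=("${pci_net_devs[@]}")
    done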
00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:56.583 Found net devices under 0000:18:00.0: mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:56.583 Found net devices under 0000:18:00.1: mlx_0_1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:56.583 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.583 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:07:56.583 altname enp24s0f0np0 00:07:56.583 altname ens785f0np0 00:07:56.583 inet 192.168.100.8/24 scope global mlx_0_0 00:07:56.583 valid_lft forever preferred_lft forever 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:56.583 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:56.583 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:07:56.583 altname enp24s0f1np1 00:07:56.583 altname ens785f1np1 00:07:56.583 inet 192.168.100.9/24 scope global mlx_0_1 00:07:56.583 valid_lft forever preferred_lft forever 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:56.583 21:04:43 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.583 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:56.584 192.168.100.9' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:56.584 192.168.100.9' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:56.584 192.168.100.9' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3767528 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3767528 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3767528 ']' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 [2024-11-26 21:04:43.293807] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
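Stripped of the xtrace prefixes, the address plumbing traced above (allocate_nic_ips / get_available_rdma_ips) reduces to a short pipeline: get_ip_address drops the prefix length from one-line `ip -o -4` output, and the first/second target IPs are head/tail picks from the newline-joined list. A minimal sketch, with the interface names hardcoded to the two ports this run discovered:

#!/usr/bin/env bash
# Condensed from the trace above: field 4 of `ip -o -4 addr show`
# is "ADDR/PREFIXLEN"; awk selects it, cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Interface names stubbed to the two mlx5 ports found in this run.
RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9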
00:07:56.584 [2024-11-26 21:04:43.293858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.584 [2024-11-26 21:04:43.355837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.584 [2024-11-26 21:04:43.395128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.584 [2024-11-26 21:04:43.395162] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.584 [2024-11-26 21:04:43.395168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.584 [2024-11-26 21:04:43.395175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.584 [2024-11-26 21:04:43.395180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.584 [2024-11-26 21:04:43.395668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 [2024-11-26 21:04:43.542680] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15a4610/0x15a8b00) succeed. 00:07:56.584 [2024-11-26 21:04:43.550443] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15a5ac0/0x15ea1a0) succeed. 
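nvmfappstart, traced above, backgrounds nvmf_tgt pinned to core 1 (-m 0x2) with every tracepoint group enabled (-e 0xFFFF), then waitforlisten polls the UNIX-domain RPC socket until the app answers. A simplified sketch of that start-and-wait pattern; the real helper retries longer and prints diagnostics, and rpc_get_methods is used here only as a cheap liveness probe:

# Launch the target and record its pid (flags copied from the trace).
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket until the target starts answering, as the
# "Waiting for process to start up..." message above reflects.
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done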
00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 Malloc0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 [2024-11-26 21:04:43.629871] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3767555 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3767555 /var/tmp/bdevperf.sock 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3767555 ']' 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.584 [2024-11-26 21:04:43.679455] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:07:56.584 [2024-11-26 21:04:43.679496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3767555 ] 00:07:56.584 [2024-11-26 21:04:43.738221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.584 [2024-11-26 21:04:43.775607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:56.584 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.585 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.585 NVMe0n1 00:07:56.585 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.585 21:04:43 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.844 Running I/O for 10 seconds... 
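The whole queue_depth setup above, minus the xtrace noise, is five target-side RPCs plus a bdevperf attach. Gathered into plain shell with the exact values from this run (repository paths abbreviated):

rpc=./scripts/rpc.py

# Target side: RDMA transport, a 64 MiB / 512 B-block malloc bdev, and
# a subsystem exposing it on 192.168.100.8:4420.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

# Initiator side: bdevperf in RPC-driven mode (-z), queue depth 1024,
# 4 KiB verify workload for 10 s, then kick the run over its own socket.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The summary that follows is internally consistent: 19137.78 IOPS x 4096 B per I/O is about 74.76 MiB/s, matching the MiB/s column.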
00:07:58.717 18432.00 IOPS, 72.00 MiB/s [2024-11-26T20:04:47.090Z] 18944.00 IOPS, 74.00 MiB/s [2024-11-26T20:04:48.472Z] 18971.33 IOPS, 74.11 MiB/s [2024-11-26T20:04:49.416Z] 18975.00 IOPS, 74.12 MiB/s [2024-11-26T20:04:50.358Z] 19046.40 IOPS, 74.40 MiB/s [2024-11-26T20:04:51.300Z] 19102.00 IOPS, 74.62 MiB/s [2024-11-26T20:04:52.238Z] 19071.57 IOPS, 74.50 MiB/s [2024-11-26T20:04:53.176Z] 19072.00 IOPS, 74.50 MiB/s [2024-11-26T20:04:54.114Z] 19114.67 IOPS, 74.67 MiB/s [2024-11-26T20:04:54.114Z] 19127.60 IOPS, 74.72 MiB/s 00:08:06.678 Latency(us) 00:08:06.678 [2024-11-26T20:04:54.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.678 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:06.678 Verification LBA range: start 0x0 length 0x4000 00:08:06.678 NVMe0n1 : 10.04 19137.78 74.76 0.00 0.00 53355.32 13107.20 35923.44 00:08:06.678 [2024-11-26T20:04:54.114Z] =================================================================================================================== 00:08:06.678 [2024-11-26T20:04:54.114Z] Total : 19137.78 74.76 0.00 0.00 53355.32 13107.20 35923.44 00:08:06.678 { 00:08:06.678 "results": [ 00:08:06.678 { 00:08:06.678 "job": "NVMe0n1", 00:08:06.678 "core_mask": "0x1", 00:08:06.678 "workload": "verify", 00:08:06.678 "status": "finished", 00:08:06.678 "verify_range": { 00:08:06.678 "start": 0, 00:08:06.678 "length": 16384 00:08:06.678 }, 00:08:06.678 "queue_depth": 1024, 00:08:06.678 "io_size": 4096, 00:08:06.678 "runtime": 10.037789, 00:08:06.678 "iops": 19137.78024224259, 00:08:06.678 "mibps": 74.75695407126011, 00:08:06.678 "io_failed": 0, 00:08:06.678 "io_timeout": 0, 00:08:06.678 "avg_latency_us": 53355.319332226274, 00:08:06.678 "min_latency_us": 13107.2, 00:08:06.678 "max_latency_us": 35923.43703703704 00:08:06.678 } 00:08:06.678 ], 00:08:06.678 "core_count": 1 00:08:06.678 } 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3767555 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3767555 ']' 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3767555 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767555 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767555' 00:08:06.937 killing process with pid 3767555 00:08:06.937 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3767555 00:08:06.937 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.937 00:08:06.937 Latency(us) 00:08:06.937 [2024-11-26T20:04:54.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.937 [2024-11-26T20:04:54.373Z] 
=================================================================================================================== 00:08:06.937 [2024-11-26T20:04:54.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3767555 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.938 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:06.938 rmmod nvme_rdma 00:08:07.197 rmmod nvme_fabrics 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3767528 ']' 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3767528 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3767528 ']' 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3767528 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3767528 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3767528' 00:08:07.197 killing process with pid 3767528 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3767528 00:08:07.197 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3767528 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:07.456 00:08:07.456 real 0m17.503s 00:08:07.456 user 0m23.771s 00:08:07.456 sys 0m5.128s 00:08:07.456 
21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:07.456 ************************************ 00:08:07.456 END TEST nvmf_queue_depth 00:08:07.456 ************************************ 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.456 21:04:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:07.456 ************************************ 00:08:07.456 START TEST nvmf_target_multipath 00:08:07.456 ************************************ 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:08:07.457 * Looking for test storage... 00:08:07.457 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.457 --rc genhtml_branch_coverage=1 00:08:07.457 --rc genhtml_function_coverage=1 00:08:07.457 --rc genhtml_legend=1 00:08:07.457 --rc geninfo_all_blocks=1 00:08:07.457 --rc geninfo_unexecuted_blocks=1 00:08:07.457 00:08:07.457 ' 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.457 --rc genhtml_branch_coverage=1 00:08:07.457 --rc genhtml_function_coverage=1 00:08:07.457 --rc genhtml_legend=1 00:08:07.457 --rc geninfo_all_blocks=1 00:08:07.457 --rc geninfo_unexecuted_blocks=1 00:08:07.457 00:08:07.457 ' 00:08:07.457 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:07.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.457 --rc genhtml_branch_coverage=1 00:08:07.457 --rc genhtml_function_coverage=1 00:08:07.457 --rc genhtml_legend=1 00:08:07.457 --rc geninfo_all_blocks=1 00:08:07.457 --rc geninfo_unexecuted_blocks=1 00:08:07.457 00:08:07.457 ' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:07.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.718 --rc genhtml_branch_coverage=1 00:08:07.718 --rc genhtml_function_coverage=1 00:08:07.718 --rc genhtml_legend=1 00:08:07.718 --rc geninfo_all_blocks=1 00:08:07.718 --rc geninfo_unexecuted_blocks=1 00:08:07.718 00:08:07.718 ' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.718 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.718 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:07.719 21:04:54 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:12.999 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:12.999 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:13.000 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:13.000 Found net devices under 0000:18:00.0: mlx_0_0 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:13.000 
21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:13.000 Found net devices under 0000:18:00.1: mlx_0_1 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
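get_rdma_if_list, whose loop iterations continue in the trace below (and appeared earlier for the queue_depth test), just intersects the detected mlx ports with whatever rxe_cfg reports as RDMA-capable. A compact sketch with the arrays stubbed to this run's devices:

# net_devs comes from the PCI scan above; rxe_net_devs from rxe_cfg.
net_devs=(mlx_0_0 mlx_0_1)
mapfile -t rxe_net_devs < <(./scripts/rxe_cfg_small.sh rxe-net)

for net_dev in "${net_devs[@]}"; do
    for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
            echo "$net_dev"
            continue 2   # matched: move on to the next net_dev
        fi
    done
done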
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:13.000 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:13.000 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:08:13.000 altname enp24s0f0np0
00:08:13.000 altname ens785f0np0
00:08:13.000 inet 192.168.100.8/24 scope global mlx_0_0
00:08:13.000 valid_lft forever preferred_lft forever
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:13.000 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:13.000 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:08:13.000 altname enp24s0f1np1
00:08:13.000 altname ens785f1np1
00:08:13.000 inet 192.168.100.9/24 scope global mlx_0_1
00:08:13.000 valid_lft forever preferred_lft forever
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2
00:08:13.000 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:13.259 192.168.100.9'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:13.259 192.168.100.9'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:13.259 192.168.100.9'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']'
00:08:13.259 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now'
00:08:13.260 run this test only with TCP transport for now
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:13.260 rmmod nvme_rdma
00:08:13.260 rmmod nvme_fabrics
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:13.260
00:08:13.260 real 0m5.823s
00:08:13.260 user 0m1.719s
00:08:13.260 sys 0m4.248s
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:08:13.260 ************************************
00:08:13.260 END TEST nvmf_target_multipath
00:08:13.260 ************************************
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:13.260 ************************************
00:08:13.260 START TEST nvmf_zcopy
00:08:13.260 ************************************
00:08:13.260 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma
00:08:13.518 * Looking for test storage...
00:08:13.518 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:13.518 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:13.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.519 --rc genhtml_branch_coverage=1
00:08:13.519 --rc genhtml_function_coverage=1
00:08:13.519 --rc genhtml_legend=1
00:08:13.519 --rc geninfo_all_blocks=1
00:08:13.519 --rc geninfo_unexecuted_blocks=1
00:08:13.519
00:08:13.519 '
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:13.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.519 --rc genhtml_branch_coverage=1
00:08:13.519 --rc genhtml_function_coverage=1
00:08:13.519 --rc genhtml_legend=1
00:08:13.519 --rc geninfo_all_blocks=1
00:08:13.519 --rc geninfo_unexecuted_blocks=1
00:08:13.519
00:08:13.519 '
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:08:13.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.519 --rc genhtml_branch_coverage=1
00:08:13.519 --rc genhtml_function_coverage=1
00:08:13.519 --rc genhtml_legend=1
00:08:13.519 --rc geninfo_all_blocks=1
00:08:13.519 --rc geninfo_unexecuted_blocks=1
00:08:13.519
00:08:13.519 '
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:08:13.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:13.519 --rc genhtml_branch_coverage=1
00:08:13.519 --rc genhtml_function_coverage=1
00:08:13.519 --rc genhtml_legend=1
00:08:13.519 --rc geninfo_all_blocks=1
00:08:13.519 --rc geninfo_unexecuted_blocks=1
00:08:13.519
00:08:13.519 '
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:13.519 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:08:13.519 21:05:00 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)'
00:08:18.790 Found 0000:18:00.0 (0x15b3 - 0x1015)
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)'
00:08:18.790 Found 0000:18:00.1 (0x15b3 - 0x1015)
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15'
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0'
00:08:18.790 Found net devices under 0000:18:00.0: mlx_0_0
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1'
00:08:18.790 Found net devices under 0000:18:00.1: mlx_0_1
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']'
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe rdma_ucm
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:08:18.790 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:08:18.791 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:18.791 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:08:18.791 altname enp24s0f0np0
00:08:18.791 altname ens785f0np0
00:08:18.791 inet 192.168.100.8/24 scope global mlx_0_0
00:08:18.791 valid_lft forever preferred_lft forever
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:08:18.791 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:08:18.791 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:08:18.791 altname enp24s0f1np1
00:08:18.791 altname ens785f1np1
00:08:18.791 inet 192.168.100.9/24 scope global mlx_0_1
00:08:18.791 valid_lft forever preferred_lft forever
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 ))
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8
00:08:18.791 192.168.100.9'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8
00:08:18.791 192.168.100.9'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8
00:08:18.791 192.168.100.9'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3776146
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3776146
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3776146 ']'
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:18.791 21:05:05 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:18.791 [2024-11-26 21:05:06.028993] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
00:08:18.791 [2024-11-26 21:05:06.029040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:18.791 [2024-11-26 21:05:06.087240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.791 [2024-11-26 21:05:06.122935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:18.791 [2024-11-26 21:05:06.122972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:18.791 [2024-11-26 21:05:06.122980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:18.791 [2024-11-26 21:05:06.122985] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:18.791 [2024-11-26 21:05:06.122990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:18.791 [2024-11-26 21:05:06.123463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:18.791 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:18.791 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:08:18.791 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:18.791 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:18.791 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma'
00:08:19.052 Unsupported transport: rdma
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:08:19.052 nvmf_trace.0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:08:19.052 rmmod nvme_rdma
00:08:19.052 rmmod nvme_fabrics
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3776146 ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3776146
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3776146 ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3776146
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3776146
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3776146'
00:08:19.052 killing process with pid 3776146
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3776146
00:08:19.052 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3776146
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:08:19.312
00:08:19.312 real 0m5.925s
00:08:19.312 user 0m2.227s
00:08:19.312 sys 0m4.151s
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:19.312 ************************************
00:08:19.312 END TEST nvmf_zcopy
00:08:19.312 ************************************
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:19.312 ************************************
00:08:19.312 START TEST nvmf_nmic
00:08:19.312 ************************************
00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma
00:08:19.312 * Looking for test storage...
00:08:19.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:19.312 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:19.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.628 --rc genhtml_branch_coverage=1 00:08:19.628 --rc genhtml_function_coverage=1 00:08:19.628 --rc genhtml_legend=1 00:08:19.628 --rc geninfo_all_blocks=1 00:08:19.628 --rc geninfo_unexecuted_blocks=1 00:08:19.628 00:08:19.628 ' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:19.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.628 --rc genhtml_branch_coverage=1 00:08:19.628 --rc genhtml_function_coverage=1 00:08:19.628 --rc genhtml_legend=1 00:08:19.628 --rc geninfo_all_blocks=1 00:08:19.628 --rc geninfo_unexecuted_blocks=1 00:08:19.628 00:08:19.628 ' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:19.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.628 --rc genhtml_branch_coverage=1 00:08:19.628 --rc genhtml_function_coverage=1 00:08:19.628 --rc genhtml_legend=1 00:08:19.628 --rc geninfo_all_blocks=1 00:08:19.628 --rc geninfo_unexecuted_blocks=1 00:08:19.628 00:08:19.628 ' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:19.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.628 --rc genhtml_branch_coverage=1 00:08:19.628 --rc genhtml_function_coverage=1 00:08:19.628 --rc genhtml_legend=1 00:08:19.628 --rc geninfo_all_blocks=1 00:08:19.628 --rc geninfo_unexecuted_blocks=1 00:08:19.628 00:08:19.628 ' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.628 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.629 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
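[editor's note] The lcov gate earlier in this trace (lt 1.15 2 -> cmp_versions) decides which coverage flags to use by splitting the two version strings on dots and comparing them component by component. A standalone sketch of that comparison, simplified from what scripts/common.sh actually does (the real helper also splits on '-' and ':' and supports more operators; version_lt is a hypothetical name, not the script's):

    # version_lt A B -- succeed when dotted version A sorts strictly before B
    version_lt() {
        local -a v1 v2
        IFS=. read -ra v1 <<< "$1"
        IFS=. read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing parts count as 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not "less than"
    }

    version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x, keep legacy --rc flags'

That comparison returning true is why the run above takes the lcov_rc_opt branch and exports the --rc lcov_branch_coverage/lcov_function_coverage options seen in the trace.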
00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.629 21:05:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.048 21:05:12 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:25.048 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:25.048 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:25.048 Found net devices under 0000:18:00.0: mlx_0_0 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:25.048 Found net devices under 0000:18:00.1: mlx_0_1 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
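[editor's note] The discovery pass above resolves each supported Mellanox PCI function to its kernel net device by globbing sysfs, which is exactly the pci_net_devs expansion shown in the trace. Reduced to a minimal sketch (PCI addresses taken from this run; the loop body is a simplification of nvmf/common.sh, not a copy):

    # Print the net interface(s) behind each mlx5 port found above.
    for pci in 0000:18:00.0 0000:18:00.1; do
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue   # glob left unexpanded: no netdev bound
            echo "Found net devices under $pci: ${netdir##*/}"
        done
    done

On this host that yields mlx_0_0 and mlx_0_1, the two ConnectX-4 Lx ports (0x15b3 - 0x1015) that the rest of the test binds its RDMA listeners to.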
00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:25.048 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:25.049 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.049 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:25.049 altname enp24s0f0np0 00:08:25.049 altname ens785f0np0 
00:08:25.049 inet 192.168.100.8/24 scope global mlx_0_0 00:08:25.049 valid_lft forever preferred_lft forever 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:25.049 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:25.049 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:25.049 altname enp24s0f1np1 00:08:25.049 altname ens785f1np1 00:08:25.049 inet 192.168.100.9/24 scope global mlx_0_1 00:08:25.049 valid_lft forever preferred_lft forever 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:25.049 
21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:25.049 192.168.100.9' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:25.049 192.168.100.9' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:25.049 192.168.100.9' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3779437 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3779437 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3779437 ']' 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:25.049 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.309 [2024-11-26 21:05:12.485127] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:08:25.309 [2024-11-26 21:05:12.485202] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.309 [2024-11-26 21:05:12.546431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.309 [2024-11-26 21:05:12.587494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.309 [2024-11-26 21:05:12.587532] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.309 [2024-11-26 21:05:12.587539] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.309 [2024-11-26 21:05:12.587545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.309 [2024-11-26 21:05:12.587549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
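[editor's note] nvmfappstart above backgrounds nvmf_tgt (-i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A rough standalone equivalent of that launch-and-wait step, under the assumption that polling rpc_get_methods is an acceptable readiness probe (the real waitforlisten handles retries and timeouts more carefully):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the app is up, bailing out if it died.
    until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" 2> /dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"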
00:08:25.309 [2024-11-26 21:05:12.588766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.309 [2024-11-26 21:05:12.588860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.309 [2024-11-26 21:05:12.588923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.309 [2024-11-26 21:05:12.588925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.309 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 [2024-11-26 21:05:12.750715] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x811530/0x815a20) succeed. 00:08:25.568 [2024-11-26 21:05:12.758886] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x812bc0/0x8570c0) succeed. 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 Malloc0 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:25.568 21:05:12 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 [2024-11-26 21:05:12.930207] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:25.568 test case1: single bdev can't be used in multiple subsystems 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.568 [2024-11-26 21:05:12.953967] bdev.c:8278:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:25.568 [2024-11-26 21:05:12.953985] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:25.568 [2024-11-26 21:05:12.953991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.568 request: 00:08:25.568 { 00:08:25.568 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:25.568 "namespace": { 00:08:25.568 "bdev_name": "Malloc0", 00:08:25.568 "no_auto_visible": false 00:08:25.568 }, 00:08:25.568 "method": "nvmf_subsystem_add_ns", 00:08:25.568 "req_id": 1 00:08:25.568 } 00:08:25.568 Got JSON-RPC error response 00:08:25.568 response: 00:08:25.568 { 00:08:25.568 "code": -32602, 00:08:25.568 "message": "Invalid parameters" 00:08:25.568 } 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:25.568 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:25.569 Adding namespace failed - expected result. 
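[editor's note] Case1 above passes because nvmf_subsystem_add_ns opens the bdev with an exclusive_write claim, so a second subsystem cannot take the same Malloc0 and the -32602 JSON-RPC error is the expected outcome. The RPC sequence the test drove, condensed into plain rpc.py calls (same method names and arguments as in the trace; rpc_cmd in the test talks to the same socket):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim wins
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
        && echo 'unexpected: second claim succeeded' \
        || echo 'expected: bdev Malloc0 already claimed'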
00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:25.569 test case2: host connect to nvmf target in multiple paths 00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:25.569 [2024-11-26 21:05:12.966006] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.569 21:05:12 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:26.947 21:05:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:08:27.885 21:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:27.885 21:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:27.885 21:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:27.885 21:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:27.885 21:05:14 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:29.821 21:05:16 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:29.821 [global] 00:08:29.821 thread=1 00:08:29.821 invalidate=1 00:08:29.821 rw=write 00:08:29.821 time_based=1 00:08:29.821 runtime=1 00:08:29.821 ioengine=libaio 00:08:29.821 direct=1 00:08:29.821 bs=4096 00:08:29.821 iodepth=1 00:08:29.821 norandommap=0 00:08:29.821 numjobs=1 00:08:29.821 00:08:29.821 verify_dump=1 00:08:29.821 verify_backlog=512 00:08:29.821 verify_state_save=0 00:08:29.821 do_verify=1 00:08:29.821 verify=crc32c-intel 00:08:29.821 [job0] 00:08:29.821 filename=/dev/nvme0n1 00:08:29.821 Could not set queue depth (nvme0n1) 00:08:30.087 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:30.087 fio-3.35 00:08:30.087 Starting 1 thread 00:08:31.023 00:08:31.023 job0: (groupid=0, jobs=1): err= 0: pid=3780653: Tue Nov 26 21:05:18 2024 00:08:31.023 read: IOPS=7688, BW=30.0MiB/s (31.5MB/s)(30.1MiB/1001msec) 00:08:31.023 slat (nsec): min=6083, max=24001, avg=7286.46, stdev=653.30 00:08:31.023 clat (nsec): min=40276, max=72890, avg=53794.81, stdev=3043.04 00:08:31.023 lat (nsec): min=52667, max=79986, avg=61081.27, stdev=3084.66 00:08:31.023 clat percentiles (nsec): 00:08:31.023 | 1.00th=[47360], 5.00th=[48896], 10.00th=[49920], 20.00th=[50944], 00:08:31.023 | 30.00th=[51968], 40.00th=[52992], 50.00th=[53504], 60.00th=[54528], 00:08:31.023 | 70.00th=[55552], 80.00th=[56576], 90.00th=[57600], 95.00th=[58624], 00:08:31.023 | 99.00th=[61184], 99.50th=[62208], 99.90th=[64768], 99.95th=[66048], 00:08:31.023 | 99.99th=[73216] 00:08:31.023 write: IOPS=8183, BW=32.0MiB/s (33.5MB/s)(32.0MiB/1001msec); 0 zone resets 00:08:31.023 slat (nsec): min=8276, max=42983, avg=9019.89, stdev=843.61 00:08:31.023 clat (nsec): min=37525, max=76631, avg=51963.72, stdev=3219.51 00:08:31.024 lat (usec): min=52, max=119, avg=60.98, stdev= 3.34 00:08:31.024 clat percentiles (nsec): 00:08:31.024 | 1.00th=[45312], 5.00th=[46848], 10.00th=[47872], 20.00th=[48896], 00:08:31.024 | 30.00th=[49920], 40.00th=[50944], 50.00th=[51968], 60.00th=[52992], 00:08:31.024 | 70.00th=[53504], 80.00th=[54528], 90.00th=[56064], 95.00th=[57088], 00:08:31.024 | 99.00th=[59136], 99.50th=[60672], 99.90th=[62720], 99.95th=[63232], 00:08:31.024 | 99.99th=[76288] 00:08:31.024 bw ( KiB/s): min=32768, max=32768, per=100.00%, avg=32768.00, stdev= 0.00, samples=1 00:08:31.024 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=1 00:08:31.024 lat (usec) : 50=19.82%, 100=80.18% 00:08:31.024 cpu : usr=8.40%, sys=17.60%, ctx=15888, majf=0, minf=1 00:08:31.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.024 issued rwts: total=7696,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.024 00:08:31.024 Run status group 0 (all jobs): 00:08:31.024 READ: bw=30.0MiB/s (31.5MB/s), 30.0MiB/s-30.0MiB/s (31.5MB/s-31.5MB/s), io=30.1MiB (31.5MB), run=1001-1001msec 00:08:31.024 WRITE: bw=32.0MiB/s (33.5MB/s), 32.0MiB/s-32.0MiB/s (33.5MB/s-33.5MB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:08:31.024 00:08:31.024 Disk stats (read/write): 00:08:31.024 nvme0n1: ios=7160/7168, merge=0/0, ticks=338/311, in_queue=649, util=90.68% 00:08:31.282 21:05:18 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- 
# lsblk -l -o NAME,SERIAL 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:33.188 rmmod nvme_rdma 00:08:33.188 rmmod nvme_fabrics 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3779437 ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3779437 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3779437 ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3779437 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3779437 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3779437' 00:08:33.188 killing process with pid 3779437 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3779437 00:08:33.188 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3779437 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:33.447 00:08:33.447 real 0m14.086s 00:08:33.447 user 0m41.719s 00:08:33.447 sys 0m5.119s 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:33.447 ************************************ 
00:08:33.447 END TEST nvmf_nmic 00:08:33.447 ************************************ 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:33.447 ************************************ 00:08:33.447 START TEST nvmf_fio_target 00:08:33.447 ************************************ 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:33.447 * Looking for test storage... 00:08:33.447 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:33.447 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:33.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.708 --rc genhtml_branch_coverage=1 00:08:33.708 --rc genhtml_function_coverage=1 00:08:33.708 --rc genhtml_legend=1 00:08:33.708 --rc geninfo_all_blocks=1 00:08:33.708 --rc geninfo_unexecuted_blocks=1 00:08:33.708 00:08:33.708 ' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:33.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.708 --rc genhtml_branch_coverage=1 00:08:33.708 --rc genhtml_function_coverage=1 00:08:33.708 --rc genhtml_legend=1 00:08:33.708 --rc geninfo_all_blocks=1 00:08:33.708 --rc geninfo_unexecuted_blocks=1 00:08:33.708 00:08:33.708 ' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:33.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.708 --rc genhtml_branch_coverage=1 00:08:33.708 --rc genhtml_function_coverage=1 00:08:33.708 --rc genhtml_legend=1 00:08:33.708 --rc geninfo_all_blocks=1 00:08:33.708 --rc geninfo_unexecuted_blocks=1 00:08:33.708 00:08:33.708 ' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:33.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:33.708 --rc genhtml_branch_coverage=1 00:08:33.708 --rc genhtml_function_coverage=1 00:08:33.708 --rc genhtml_legend=1 00:08:33.708 --rc geninfo_all_blocks=1 00:08:33.708 --rc geninfo_unexecuted_blocks=1 00:08:33.708 00:08:33.708 ' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:33.708 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:33.709 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:33.709 
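
The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' — an unset variable fed to an integer test. The run continues because the failed test simply takes the false branch. The usual guard, sketched with an illustrative name since the trace does not show which variable is empty:

    # Hypothetical flag name; expanding with a default of 0 keeps the
    # integer comparison well-formed even when the variable is unset.
    if [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi
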
21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:33.709 21:05:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:08:38.980 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:08:38.980 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:38.980 Found net devices under 0000:18:00.0: mlx_0_0 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:38.980 Found net devices under 0000:18:00.1: mlx_0_1 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:38.980 21:05:26 
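
Both ConnectX-4 Lx functions (device ID 0x1015 under 0000:18:00.0/.1) resolve to netdevs through sysfs; the loop traced at nvmf/common.sh@410-429 amounts to:

    # Per-PCI-function netdev lookup, as recorded in the trace above.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep interface names only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
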
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:38.980 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:39.240 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:39.240 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:08:39.240 altname enp24s0f0np0 00:08:39.240 altname ens785f0np0 00:08:39.240 inet 192.168.100.8/24 scope global mlx_0_0 00:08:39.240 valid_lft forever preferred_lft forever 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:39.240 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:39.240 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:08:39.240 altname enp24s0f1np1 00:08:39.240 altname ens785f1np1 00:08:39.240 inet 192.168.100.9/24 scope global mlx_0_1 00:08:39.240 valid_lft forever preferred_lft forever 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:39.240 21:05:26 
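
With mlx_0_0 at 192.168.100.8/24 and mlx_0_1 at 192.168.100.9/24, address harvesting reduces to a one-liner per interface. The module loads and the get_ip_address pipeline are taken directly from the trace:

    # Kernel modules loaded by load_ib_rdma_modules, in trace order.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # get_ip_address as run above: fourth column of `ip -o -4 addr show`,
    # with the prefix length stripped.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1    # -> 192.168.100.9
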
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:39.240 192.168.100.9' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:39.240 192.168.100.9' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:39.240 192.168.100.9' 00:08:39.240 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3784449 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3784449 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3784449 ']' 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.241 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.241 [2024-11-26 21:05:26.639874] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:08:39.241 [2024-11-26 21:05:26.639924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.500 [2024-11-26 21:05:26.701273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.500 [2024-11-26 21:05:26.739183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:39.500 [2024-11-26 21:05:26.739221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.500 [2024-11-26 21:05:26.739228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.500 [2024-11-26 21:05:26.739234] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.500 [2024-11-26 21:05:26.739240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.500 [2024-11-26 21:05:26.740472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.500 [2024-11-26 21:05:26.740558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.500 [2024-11-26 21:05:26.740644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.500 [2024-11-26 21:05:26.740645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.500 21:05:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:39.759 [2024-11-26 21:05:27.051939] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e0a530/0x1e0ea20) succeed. 00:08:39.759 [2024-11-26 21:05:27.060346] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e0bbc0/0x1e500c0) succeed. 
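
Both IB devices register (mlx5_0 and mlx5_1) and the target is listening on /var/tmp/spdk.sock. The two commands that got it there appear in full in the trace and can be replayed standalone from an SPDK checkout:

    # Start the target on cores 0-3 with all trace groups enabled, then
    # create the RDMA transport with the options recorded above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
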
00:08:40.017 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.017 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:40.017 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.276 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:40.276 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.535 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:40.535 21:05:27 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.794 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:40.794 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:40.794 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.052 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:41.052 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.311 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:41.311 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:41.570 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:41.570 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:41.570 21:05:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:41.829 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:41.829 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.087 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:42.087 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:42.346 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
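
target/fio.sh then builds seven 64 MiB malloc bdevs (512-byte blocks), layers a RAID0 and a concat volume over four of them, and exposes everything through one subsystem; the listener add and the host-side nvme connect continue just below. Condensed from the rpc.py calls traced above:

    ./scripts/rpc.py bdev_malloc_create 64 512        # repeated: Malloc0 .. Malloc6
    ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    ./scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
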
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:42.346 [2024-11-26 21:05:29.688610] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:42.346 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:42.606 21:05:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:42.866 21:05:30 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:43.804 21:05:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:45.728 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:45.728 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:45.728 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:45.728 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:45.728 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:45.729 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:45.729 21:05:33 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:45.729 [global] 00:08:45.729 thread=1 00:08:45.729 invalidate=1 00:08:45.729 rw=write 00:08:45.729 time_based=1 00:08:45.729 runtime=1 00:08:45.729 ioengine=libaio 00:08:45.729 direct=1 00:08:45.729 bs=4096 00:08:45.729 iodepth=1 00:08:45.729 norandommap=0 00:08:45.729 numjobs=1 00:08:45.729 00:08:45.729 verify_dump=1 00:08:45.729 verify_backlog=512 00:08:45.729 verify_state_save=0 00:08:45.729 do_verify=1 00:08:45.729 verify=crc32c-intel 00:08:45.729 [job0] 00:08:45.729 filename=/dev/nvme0n1 00:08:45.729 [job1] 00:08:45.729 filename=/dev/nvme0n2 00:08:45.729 [job2] 00:08:45.729 filename=/dev/nvme0n3 00:08:45.729 [job3] 00:08:45.729 filename=/dev/nvme0n4 00:08:45.994 Could not set queue depth (nvme0n1) 00:08:45.994 Could not set queue depth (nvme0n2) 00:08:45.994 Could not set queue depth (nvme0n3) 00:08:45.994 Could not set queue depth (nvme0n4) 00:08:46.251 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.251 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.251 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.251 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.251 fio-3.35 00:08:46.251 Starting 4 threads 00:08:47.624 00:08:47.624 job0: (groupid=0, jobs=1): err= 0: pid=3785977: Tue Nov 26 21:05:34 2024 00:08:47.624 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:08:47.624 slat (nsec): min=6114, max=26728, avg=7041.89, stdev=903.60 00:08:47.624 clat (usec): min=59, max=187, avg=109.37, stdev=25.35 00:08:47.624 lat (usec): min=68, max=194, avg=116.41, stdev=25.39 00:08:47.624 clat percentiles (usec): 00:08:47.624 | 1.00th=[ 67], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:08:47.624 | 30.00th=[ 84], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 124], 00:08:47.624 | 70.00th=[ 126], 80.00th=[ 129], 90.00th=[ 135], 95.00th=[ 139], 00:08:47.624 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 184], 99.95th=[ 186], 00:08:47.624 | 99.99th=[ 188] 00:08:47.624 write: IOPS=4574, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1001msec); 0 zone resets 00:08:47.624 slat (nsec): min=6838, max=41667, avg=9269.57, stdev=1034.05 00:08:47.624 clat (usec): min=46, max=182, avg=101.30, stdev=24.46 00:08:47.624 lat (usec): min=56, max=192, avg=110.56, stdev=24.53 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 63], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 73], 00:08:47.625 | 30.00th=[ 77], 40.00th=[ 108], 50.00th=[ 113], 60.00th=[ 116], 00:08:47.625 | 70.00th=[ 119], 80.00th=[ 121], 90.00th=[ 126], 95.00th=[ 130], 00:08:47.625 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 174], 99.95th=[ 178], 00:08:47.625 | 99.99th=[ 184] 00:08:47.625 bw ( KiB/s): min=20480, max=20480, per=30.55%, avg=20480.00, stdev= 0.00, samples=1 00:08:47.625 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:47.625 lat (usec) : 50=0.01%, 100=35.82%, 250=64.17% 00:08:47.625 cpu : usr=4.50%, sys=7.00%, ctx=8675, majf=0, minf=1 00:08:47.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 issued rwts: total=4096,4579,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.625 job1: (groupid=0, jobs=1): err= 0: pid=3785978: Tue Nov 26 21:05:34 2024 00:08:47.625 read: IOPS=3876, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1001msec) 00:08:47.625 slat (nsec): min=6201, max=32224, avg=6950.32, stdev=834.83 00:08:47.625 clat (usec): min=74, max=347, avg=119.48, stdev=14.87 00:08:47.625 lat (usec): min=80, max=354, avg=126.43, stdev=14.82 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 94], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 110], 00:08:47.625 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:08:47.625 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 143], 95.00th=[ 151], 00:08:47.625 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 192], 99.95th=[ 210], 00:08:47.625 | 99.99th=[ 347] 00:08:47.625 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:08:47.625 slat (nsec): min=5546, max=48679, avg=9275.56, stdev=1106.62 00:08:47.625 clat (usec): min=55, 
max=200, avg=111.14, stdev=13.51 00:08:47.625 lat (usec): min=64, max=209, avg=120.41, stdev=13.52 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 86], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 102], 00:08:47.625 | 30.00th=[ 105], 40.00th=[ 108], 50.00th=[ 110], 60.00th=[ 112], 00:08:47.625 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 133], 95.00th=[ 141], 00:08:47.625 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 186], 99.95th=[ 190], 00:08:47.625 | 99.99th=[ 200] 00:08:47.625 bw ( KiB/s): min=16384, max=16384, per=24.44%, avg=16384.00, stdev= 0.00, samples=1 00:08:47.625 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:47.625 lat (usec) : 100=9.22%, 250=90.77%, 500=0.01% 00:08:47.625 cpu : usr=4.20%, sys=6.40%, ctx=7977, majf=0, minf=1 00:08:47.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 issued rwts: total=3880,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.625 job2: (groupid=0, jobs=1): err= 0: pid=3785979: Tue Nov 26 21:05:34 2024 00:08:47.625 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:47.625 slat (nsec): min=6213, max=46064, avg=7475.95, stdev=1135.10 00:08:47.625 clat (usec): min=77, max=367, avg=127.64, stdev=15.80 00:08:47.625 lat (usec): min=84, max=374, avg=135.12, stdev=15.93 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 85], 5.00th=[ 97], 10.00th=[ 116], 20.00th=[ 120], 00:08:47.625 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:08:47.625 | 70.00th=[ 133], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:08:47.625 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 202], 99.95th=[ 223], 00:08:47.625 | 99.99th=[ 367] 00:08:47.625 write: IOPS=3999, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:08:47.625 slat (nsec): min=8566, max=39373, avg=9585.66, stdev=1100.16 00:08:47.625 clat (usec): min=70, max=190, avg=115.46, stdev=16.77 00:08:47.625 lat (usec): min=79, max=205, avg=125.04, stdev=16.85 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 76], 5.00th=[ 82], 10.00th=[ 87], 20.00th=[ 110], 00:08:47.625 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 120], 00:08:47.625 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 141], 00:08:47.625 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 186], 00:08:47.625 | 99.99th=[ 192] 00:08:47.625 bw ( KiB/s): min=16384, max=16384, per=24.44%, avg=16384.00, stdev= 0.00, samples=1 00:08:47.625 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:47.625 lat (usec) : 100=10.56%, 250=89.43%, 500=0.01% 00:08:47.625 cpu : usr=2.80%, sys=7.30%, ctx=7587, majf=0, minf=1 00:08:47.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 issued rwts: total=3584,4003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.625 job3: (groupid=0, jobs=1): err= 0: pid=3785980: Tue Nov 26 21:05:34 2024 00:08:47.625 read: IOPS=3851, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1001msec) 00:08:47.625 slat (nsec): min=6490, max=20744, avg=7468.76, stdev=933.90 00:08:47.625 
clat (usec): min=71, max=301, avg=119.44, stdev=14.81 00:08:47.625 lat (usec): min=79, max=309, avg=126.91, stdev=14.90 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 111], 00:08:47.625 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:08:47.625 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 143], 95.00th=[ 151], 00:08:47.625 | 99.00th=[ 167], 99.50th=[ 176], 99.90th=[ 198], 99.95th=[ 208], 00:08:47.625 | 99.99th=[ 302] 00:08:47.625 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:08:47.625 slat (nsec): min=8725, max=41797, avg=9689.47, stdev=1186.55 00:08:47.625 clat (usec): min=70, max=191, avg=111.10, stdev=13.41 00:08:47.625 lat (usec): min=80, max=200, avg=120.79, stdev=13.56 00:08:47.625 clat percentiles (usec): 00:08:47.625 | 1.00th=[ 88], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 102], 00:08:47.625 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:08:47.625 | 70.00th=[ 113], 80.00th=[ 117], 90.00th=[ 133], 95.00th=[ 141], 00:08:47.625 | 99.00th=[ 157], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 186], 00:08:47.625 | 99.99th=[ 192] 00:08:47.625 bw ( KiB/s): min=16384, max=16384, per=24.44%, avg=16384.00, stdev= 0.00, samples=1 00:08:47.625 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:47.625 lat (usec) : 100=7.70%, 250=92.29%, 500=0.01% 00:08:47.625 cpu : usr=3.50%, sys=7.20%, ctx=7951, majf=0, minf=1 00:08:47.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.625 issued rwts: total=3855,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.625 00:08:47.625 Run status group 0 (all jobs): 00:08:47.625 READ: bw=60.2MiB/s (63.1MB/s), 14.0MiB/s-16.0MiB/s (14.7MB/s-16.8MB/s), io=60.2MiB (63.1MB), run=1001-1001msec 00:08:47.625 WRITE: bw=65.5MiB/s (68.6MB/s), 15.6MiB/s-17.9MiB/s (16.4MB/s-18.7MB/s), io=65.5MiB (68.7MB), run=1001-1001msec 00:08:47.625 00:08:47.625 Disk stats (read/write): 00:08:47.625 nvme0n1: ios=3634/4038, merge=0/0, ticks=380/382, in_queue=762, util=87.37% 00:08:47.625 nvme0n2: ios=3250/3584, merge=0/0, ticks=372/385, in_queue=757, util=87.54% 00:08:47.625 nvme0n3: ios=3072/3463, merge=0/0, ticks=393/390, in_queue=783, util=89.26% 00:08:47.625 nvme0n4: ios=3226/3584, merge=0/0, ticks=371/383, in_queue=754, util=89.81% 00:08:47.625 21:05:34 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:47.625 [global] 00:08:47.625 thread=1 00:08:47.625 invalidate=1 00:08:47.625 rw=randwrite 00:08:47.625 time_based=1 00:08:47.625 runtime=1 00:08:47.625 ioengine=libaio 00:08:47.625 direct=1 00:08:47.625 bs=4096 00:08:47.625 iodepth=1 00:08:47.625 norandommap=0 00:08:47.625 numjobs=1 00:08:47.625 00:08:47.625 verify_dump=1 00:08:47.625 verify_backlog=512 00:08:47.625 verify_state_save=0 00:08:47.625 do_verify=1 00:08:47.625 verify=crc32c-intel 00:08:47.625 [job0] 00:08:47.625 filename=/dev/nvme0n1 00:08:47.625 [job1] 00:08:47.625 filename=/dev/nvme0n2 00:08:47.625 [job2] 00:08:47.625 filename=/dev/nvme0n3 00:08:47.625 [job3] 00:08:47.625 filename=/dev/nvme0n4 00:08:47.625 Could not set queue depth (nvme0n1) 00:08:47.625 Could not set queue depth 
(nvme0n2) 00:08:47.625 Could not set queue depth (nvme0n3) 00:08:47.625 Could not set queue depth (nvme0n4) 00:08:47.625 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.625 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.625 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.625 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:47.625 fio-3.35 00:08:47.625 Starting 4 threads 00:08:49.001 00:08:49.001 job0: (groupid=0, jobs=1): err= 0: pid=3786396: Tue Nov 26 21:05:36 2024 00:08:49.001 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.001 slat (nsec): min=6065, max=18558, avg=7305.59, stdev=825.91 00:08:49.001 clat (usec): min=72, max=234, avg=157.03, stdev=17.74 00:08:49.001 lat (usec): min=79, max=241, avg=164.34, stdev=17.70 00:08:49.001 clat percentiles (usec): 00:08:49.001 | 1.00th=[ 96], 5.00th=[ 137], 10.00th=[ 145], 20.00th=[ 149], 00:08:49.001 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:08:49.001 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 169], 95.00th=[ 182], 00:08:49.001 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 231], 99.95th=[ 235], 00:08:49.001 | 99.99th=[ 235] 00:08:49.001 write: IOPS=3128, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:08:49.001 slat (nsec): min=8098, max=40161, avg=9226.59, stdev=1089.14 00:08:49.001 clat (usec): min=64, max=215, avg=144.76, stdev=21.46 00:08:49.001 lat (usec): min=74, max=224, avg=153.99, stdev=21.51 00:08:49.001 clat percentiles (usec): 00:08:49.001 | 1.00th=[ 85], 5.00th=[ 96], 10.00th=[ 128], 20.00th=[ 137], 00:08:49.001 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:08:49.001 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 188], 00:08:49.001 | 99.00th=[ 200], 99.50th=[ 202], 99.90th=[ 208], 99.95th=[ 215], 00:08:49.001 | 99.99th=[ 217] 00:08:49.001 bw ( KiB/s): min=12536, max=12536, per=20.30%, avg=12536.00, stdev= 0.00, samples=1 00:08:49.001 iops : min= 3134, max= 3134, avg=3134.00, stdev= 0.00, samples=1 00:08:49.001 lat (usec) : 100=4.14%, 250=95.86% 00:08:49.001 cpu : usr=3.60%, sys=4.70%, ctx=6204, majf=0, minf=1 00:08:49.001 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 issued rwts: total=3072,3132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.002 job1: (groupid=0, jobs=1): err= 0: pid=3786397: Tue Nov 26 21:05:36 2024 00:08:49.002 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.002 slat (nsec): min=6163, max=30162, avg=7394.53, stdev=939.71 00:08:49.002 clat (usec): min=73, max=238, avg=156.87, stdev=18.57 00:08:49.002 lat (usec): min=80, max=245, avg=164.27, stdev=18.57 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 95], 5.00th=[ 128], 10.00th=[ 145], 20.00th=[ 149], 00:08:49.002 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:08:49.002 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 188], 00:08:49.002 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 227], 99.95th=[ 231], 00:08:49.002 | 99.99th=[ 239] 00:08:49.002 write: IOPS=3141, 
BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:08:49.002 slat (nsec): min=6429, max=31243, avg=9167.91, stdev=922.56 00:08:49.002 clat (usec): min=61, max=216, avg=144.29, stdev=22.33 00:08:49.002 lat (usec): min=70, max=225, avg=153.45, stdev=22.34 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 82], 5.00th=[ 94], 10.00th=[ 125], 20.00th=[ 137], 00:08:49.002 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:08:49.002 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 188], 00:08:49.002 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 212], 99.95th=[ 217], 00:08:49.002 | 99.99th=[ 217] 00:08:49.002 bw ( KiB/s): min=12624, max=12624, per=20.44%, avg=12624.00, stdev= 0.00, samples=1 00:08:49.002 iops : min= 3156, max= 3156, avg=3156.00, stdev= 0.00, samples=1 00:08:49.002 lat (usec) : 100=4.68%, 250=95.32% 00:08:49.002 cpu : usr=2.90%, sys=5.40%, ctx=6217, majf=0, minf=2 00:08:49.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 issued rwts: total=3072,3145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.002 job2: (groupid=0, jobs=1): err= 0: pid=3786398: Tue Nov 26 21:05:36 2024 00:08:49.002 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:08:49.002 slat (nsec): min=6294, max=17129, avg=7618.91, stdev=830.44 00:08:49.002 clat (usec): min=76, max=225, avg=156.95, stdev=20.61 00:08:49.002 lat (usec): min=84, max=232, avg=164.57, stdev=20.61 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 86], 5.00th=[ 108], 10.00th=[ 143], 20.00th=[ 149], 00:08:49.002 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:08:49.002 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 180], 95.00th=[ 192], 00:08:49.002 | 99.00th=[ 210], 99.50th=[ 215], 99.90th=[ 221], 99.95th=[ 225], 00:08:49.002 | 99.99th=[ 227] 00:08:49.002 write: IOPS=3112, BW=12.2MiB/s (12.8MB/s)(12.2MiB/1001msec); 0 zone resets 00:08:49.002 slat (nsec): min=8438, max=40460, avg=9448.92, stdev=1123.48 00:08:49.002 clat (usec): min=73, max=215, avg=145.13, stdev=22.72 00:08:49.002 lat (usec): min=82, max=224, avg=154.57, stdev=22.75 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 80], 5.00th=[ 93], 10.00th=[ 121], 20.00th=[ 137], 00:08:49.002 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:08:49.002 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 184], 00:08:49.002 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 204], 99.95th=[ 212], 00:08:49.002 | 99.99th=[ 217] 00:08:49.002 bw ( KiB/s): min=12464, max=12464, per=20.19%, avg=12464.00, stdev= 0.00, samples=1 00:08:49.002 iops : min= 3116, max= 3116, avg=3116.00, stdev= 0.00, samples=1 00:08:49.002 lat (usec) : 100=5.03%, 250=94.97% 00:08:49.002 cpu : usr=3.50%, sys=5.10%, ctx=6188, majf=0, minf=1 00:08:49.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 issued rwts: total=3072,3116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.002 job3: (groupid=0, jobs=1): err= 0: pid=3786399: Tue Nov 26 21:05:36 2024 00:08:49.002 
read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:08:49.002 slat (nsec): min=6504, max=24656, avg=7118.62, stdev=647.15 00:08:49.002 clat (usec): min=61, max=106, avg=77.57, stdev= 4.62 00:08:49.002 lat (usec): min=68, max=113, avg=84.68, stdev= 4.67 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 74], 00:08:49.002 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 79], 00:08:49.002 | 70.00th=[ 80], 80.00th=[ 82], 90.00th=[ 84], 95.00th=[ 86], 00:08:49.002 | 99.00th=[ 91], 99.50th=[ 92], 99.90th=[ 95], 99.95th=[ 97], 00:08:49.002 | 99.99th=[ 108] 00:08:49.002 write: IOPS=6052, BW=23.6MiB/s (24.8MB/s)(23.7MiB/1001msec); 0 zone resets 00:08:49.002 slat (nsec): min=8297, max=35539, avg=9003.96, stdev=840.97 00:08:49.002 clat (usec): min=51, max=291, avg=73.52, stdev= 5.31 00:08:49.002 lat (usec): min=69, max=300, avg=82.52, stdev= 5.40 00:08:49.002 clat percentiles (usec): 00:08:49.002 | 1.00th=[ 65], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:08:49.002 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 75], 00:08:49.002 | 70.00th=[ 76], 80.00th=[ 78], 90.00th=[ 80], 95.00th=[ 82], 00:08:49.002 | 99.00th=[ 86], 99.50th=[ 87], 99.90th=[ 92], 99.95th=[ 94], 00:08:49.002 | 99.99th=[ 293] 00:08:49.002 bw ( KiB/s): min=24576, max=24576, per=39.80%, avg=24576.00, stdev= 0.00, samples=1 00:08:49.002 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:08:49.002 lat (usec) : 100=99.97%, 250=0.02%, 500=0.01% 00:08:49.002 cpu : usr=6.40%, sys=8.80%, ctx=11692, majf=0, minf=1 00:08:49.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.002 issued rwts: total=5632,6059,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.002 00:08:49.002 Run status group 0 (all jobs): 00:08:49.002 READ: bw=57.9MiB/s (60.8MB/s), 12.0MiB/s-22.0MiB/s (12.6MB/s-23.0MB/s), io=58.0MiB (60.8MB), run=1001-1001msec 00:08:49.002 WRITE: bw=60.3MiB/s (63.2MB/s), 12.2MiB/s-23.6MiB/s (12.8MB/s-24.8MB/s), io=60.4MiB (63.3MB), run=1001-1001msec 00:08:49.002 00:08:49.002 Disk stats (read/write): 00:08:49.002 nvme0n1: ios=2610/2805, merge=0/0, ticks=395/393, in_queue=788, util=87.58% 00:08:49.002 nvme0n2: ios=2560/2819, merge=0/0, ticks=384/392, in_queue=776, util=87.64% 00:08:49.002 nvme0n3: ios=2560/2791, merge=0/0, ticks=383/391, in_queue=774, util=89.27% 00:08:49.002 nvme0n4: ios=4973/5120, merge=0/0, ticks=363/368, in_queue=731, util=89.82% 00:08:49.002 21:05:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:49.002 [global] 00:08:49.002 thread=1 00:08:49.002 invalidate=1 00:08:49.002 rw=write 00:08:49.002 time_based=1 00:08:49.002 runtime=1 00:08:49.002 ioengine=libaio 00:08:49.002 direct=1 00:08:49.002 bs=4096 00:08:49.002 iodepth=128 00:08:49.002 norandommap=0 00:08:49.002 numjobs=1 00:08:49.002 00:08:49.002 verify_dump=1 00:08:49.002 verify_backlog=512 00:08:49.002 verify_state_save=0 00:08:49.002 do_verify=1 00:08:49.002 verify=crc32c-intel 00:08:49.002 [job0] 00:08:49.002 filename=/dev/nvme0n1 00:08:49.002 [job1] 00:08:49.002 filename=/dev/nvme0n2 00:08:49.002 [job2] 00:08:49.002 filename=/dev/nvme0n3 00:08:49.002 [job3] 00:08:49.002 
filename=/dev/nvme0n4 00:08:49.002 Could not set queue depth (nvme0n1) 00:08:49.002 Could not set queue depth (nvme0n2) 00:08:49.002 Could not set queue depth (nvme0n3) 00:08:49.002 Could not set queue depth (nvme0n4) 00:08:49.260 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.260 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.260 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.260 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:49.260 fio-3.35 00:08:49.260 Starting 4 threads 00:08:50.658 00:08:50.658 job0: (groupid=0, jobs=1): err= 0: pid=3786826: Tue Nov 26 21:05:37 2024 00:08:50.658 read: IOPS=7530, BW=29.4MiB/s (30.8MB/s)(29.4MiB/1001msec) 00:08:50.658 slat (nsec): min=1245, max=4717.0k, avg=66470.91, stdev=303909.61 00:08:50.658 clat (usec): min=358, max=18656, avg=8522.53, stdev=3407.34 00:08:50.658 lat (usec): min=759, max=18664, avg=8589.00, stdev=3427.63 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 3425], 5.00th=[ 4883], 10.00th=[ 5145], 20.00th=[ 5669], 00:08:50.658 | 30.00th=[ 6194], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 8586], 00:08:50.658 | 70.00th=[10290], 80.00th=[11600], 90.00th=[13042], 95.00th=[15139], 00:08:50.658 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:08:50.658 | 99.99th=[18744] 00:08:50.658 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:08:50.658 slat (nsec): min=1805, max=3981.1k, avg=61093.80, stdev=275929.29 00:08:50.658 clat (usec): min=3597, max=18381, avg=8114.52, stdev=3096.94 00:08:50.658 lat (usec): min=3605, max=18384, avg=8175.61, stdev=3116.04 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 4424], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5604], 00:08:50.658 | 30.00th=[ 5997], 40.00th=[ 6390], 50.00th=[ 6849], 60.00th=[ 8094], 00:08:50.658 | 70.00th=[ 9241], 80.00th=[10945], 90.00th=[12911], 95.00th=[14222], 00:08:50.658 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:08:50.658 | 99.99th=[18482] 00:08:50.658 bw ( KiB/s): min=24576, max=24576, per=24.75%, avg=24576.00, stdev= 0.00, samples=1 00:08:50.658 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:08:50.658 lat (usec) : 500=0.01%, 1000=0.07% 00:08:50.658 lat (msec) : 2=0.14%, 4=0.71%, 10=70.57%, 20=28.50% 00:08:50.658 cpu : usr=3.70%, sys=6.20%, ctx=1804, majf=0, minf=1 00:08:50.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:50.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.658 issued rwts: total=7538,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.658 job1: (groupid=0, jobs=1): err= 0: pid=3786827: Tue Nov 26 21:05:37 2024 00:08:50.658 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:08:50.658 slat (nsec): min=1208, max=4730.1k, avg=95411.78, stdev=407667.59 00:08:50.658 clat (usec): min=954, max=20845, avg=12107.91, stdev=3691.26 00:08:50.658 lat (usec): min=1144, max=20854, avg=12203.32, stdev=3707.36 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 3949], 5.00th=[ 5014], 10.00th=[ 7635], 20.00th=[ 9372], 00:08:50.658 | 30.00th=[10421], 
40.00th=[11338], 50.00th=[12125], 60.00th=[12780], 00:08:50.658 | 70.00th=[13566], 80.00th=[15270], 90.00th=[17433], 95.00th=[18744], 00:08:50.658 | 99.00th=[19792], 99.50th=[20317], 99.90th=[20841], 99.95th=[20841], 00:08:50.658 | 99.99th=[20841] 00:08:50.658 write: IOPS=5569, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1003msec); 0 zone resets 00:08:50.658 slat (nsec): min=1831, max=4231.4k, avg=86900.37, stdev=357513.77 00:08:50.658 clat (usec): min=1131, max=20881, avg=11595.53, stdev=3519.10 00:08:50.658 lat (usec): min=1143, max=20893, avg=11682.43, stdev=3530.70 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 3621], 5.00th=[ 6259], 10.00th=[ 7046], 20.00th=[ 8455], 00:08:50.658 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11600], 60.00th=[12649], 00:08:50.658 | 70.00th=[13435], 80.00th=[13960], 90.00th=[16712], 95.00th=[17957], 00:08:50.658 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20841], 99.95th=[20841], 00:08:50.658 | 99.99th=[20841] 00:08:50.658 bw ( KiB/s): min=21448, max=22224, per=21.99%, avg=21836.00, stdev=548.71, samples=2 00:08:50.658 iops : min= 5362, max= 5556, avg=5459.00, stdev=137.18, samples=2 00:08:50.658 lat (usec) : 1000=0.01% 00:08:50.658 lat (msec) : 2=0.20%, 4=1.24%, 10=27.08%, 20=70.84%, 50=0.64% 00:08:50.658 cpu : usr=3.59%, sys=4.19%, ctx=1642, majf=0, minf=1 00:08:50.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:50.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.658 issued rwts: total=5120,5586,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.658 job2: (groupid=0, jobs=1): err= 0: pid=3786828: Tue Nov 26 21:05:37 2024 00:08:50.658 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:08:50.658 slat (nsec): min=1317, max=4979.8k, avg=84978.88, stdev=373671.06 00:08:50.658 clat (usec): min=3648, max=20909, avg=11220.77, stdev=3690.95 00:08:50.658 lat (usec): min=3650, max=21474, avg=11305.75, stdev=3705.95 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7635], 00:08:50.658 | 30.00th=[ 8455], 40.00th=[10159], 50.00th=[11469], 60.00th=[12125], 00:08:50.658 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15795], 95.00th=[18220], 00:08:50.658 | 99.00th=[19792], 99.50th=[20579], 99.90th=[20579], 99.95th=[20841], 00:08:50.658 | 99.99th=[20841] 00:08:50.658 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(23.9MiB/1003msec); 0 zone resets 00:08:50.658 slat (nsec): min=1842, max=3974.4k, avg=81396.41, stdev=354178.53 00:08:50.658 clat (usec): min=2179, max=21034, avg=10423.14, stdev=3736.04 00:08:50.658 lat (usec): min=2963, max=21045, avg=10504.53, stdev=3756.38 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 4817], 5.00th=[ 5538], 10.00th=[ 5932], 20.00th=[ 6849], 00:08:50.658 | 30.00th=[ 7898], 40.00th=[ 8717], 50.00th=[ 9765], 60.00th=[10945], 00:08:50.658 | 70.00th=[12518], 80.00th=[13698], 90.00th=[15401], 95.00th=[17957], 00:08:50.658 | 99.00th=[18744], 99.50th=[19792], 99.90th=[20579], 99.95th=[20579], 00:08:50.658 | 99.99th=[21103] 00:08:50.658 bw ( KiB/s): min=23272, max=24576, per=24.10%, avg=23924.00, stdev=922.07, samples=2 00:08:50.658 iops : min= 5818, max= 6144, avg=5981.00, stdev=230.52, samples=2 00:08:50.658 lat (msec) : 4=0.33%, 10=45.62%, 20=53.50%, 50=0.55% 00:08:50.658 cpu : usr=1.90%, sys=6.39%, ctx=1588, majf=0, minf=2 00:08:50.658 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:50.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.658 issued rwts: total=5632,6108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.658 job3: (groupid=0, jobs=1): err= 0: pid=3786829: Tue Nov 26 21:05:37 2024 00:08:50.658 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:08:50.658 slat (nsec): min=1316, max=4910.2k, avg=89205.10, stdev=359344.99 00:08:50.658 clat (usec): min=3198, max=20168, avg=11552.36, stdev=4057.78 00:08:50.658 lat (usec): min=3207, max=20172, avg=11641.56, stdev=4083.25 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 4686], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7570], 00:08:50.658 | 30.00th=[ 8291], 40.00th=[ 9634], 50.00th=[11207], 60.00th=[12649], 00:08:50.658 | 70.00th=[14484], 80.00th=[15926], 90.00th=[17171], 95.00th=[18220], 00:08:50.658 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[19792], 00:08:50.658 | 99.99th=[20055] 00:08:50.658 write: IOPS=5506, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1003msec); 0 zone resets 00:08:50.658 slat (nsec): min=1877, max=4997.7k, avg=94282.14, stdev=367699.77 00:08:50.658 clat (usec): min=2245, max=23782, avg=12271.42, stdev=4441.61 00:08:50.658 lat (usec): min=4769, max=23791, avg=12365.70, stdev=4466.34 00:08:50.658 clat percentiles (usec): 00:08:50.658 | 1.00th=[ 5276], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 7635], 00:08:50.658 | 30.00th=[ 8586], 40.00th=[10421], 50.00th=[11731], 60.00th=[13042], 00:08:50.658 | 70.00th=[15139], 80.00th=[17171], 90.00th=[18482], 95.00th=[19530], 00:08:50.658 | 99.00th=[21890], 99.50th=[22152], 99.90th=[22676], 99.95th=[23725], 00:08:50.658 | 99.99th=[23725] 00:08:50.658 bw ( KiB/s): min=17568, max=25600, per=21.74%, avg=21584.00, stdev=5679.48, samples=2 00:08:50.658 iops : min= 4392, max= 6400, avg=5396.00, stdev=1419.87, samples=2 00:08:50.658 lat (msec) : 4=0.10%, 10=38.77%, 20=59.29%, 50=1.84% 00:08:50.658 cpu : usr=2.40%, sys=5.29%, ctx=1436, majf=0, minf=1 00:08:50.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:50.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:50.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:50.658 issued rwts: total=5120,5523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:50.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:50.658 00:08:50.658 Run status group 0 (all jobs): 00:08:50.658 READ: bw=91.2MiB/s (95.6MB/s), 19.9MiB/s-29.4MiB/s (20.9MB/s-30.8MB/s), io=91.4MiB (95.9MB), run=1001-1003msec 00:08:50.658 WRITE: bw=97.0MiB/s (102MB/s), 21.5MiB/s-30.0MiB/s (22.6MB/s-31.4MB/s), io=97.3MiB (102MB), run=1001-1003msec 00:08:50.658 00:08:50.658 Disk stats (read/write): 00:08:50.658 nvme0n1: ios=6181/6144, merge=0/0, ticks=15464/14796, in_queue=30260, util=87.47% 00:08:50.658 nvme0n2: ios=4427/4608, merge=0/0, ticks=16542/15064, in_queue=31606, util=87.15% 00:08:50.658 nvme0n3: ios=5120/5396, merge=0/0, ticks=15208/14968, in_queue=30176, util=88.85% 00:08:50.658 nvme0n4: ios=4608/4857, merge=0/0, ticks=15384/15633, in_queue=31017, util=89.72% 00:08:50.658 21:05:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
00:08:50.658 [global] 00:08:50.658 thread=1 00:08:50.658 invalidate=1 00:08:50.658 rw=randwrite 00:08:50.658 time_based=1 00:08:50.658 runtime=1 00:08:50.658 ioengine=libaio 00:08:50.658 direct=1 00:08:50.658 bs=4096 00:08:50.658 iodepth=128 00:08:50.658 norandommap=0 00:08:50.658 numjobs=1 00:08:50.658 00:08:50.658 verify_dump=1 00:08:50.658 verify_backlog=512 00:08:50.658 verify_state_save=0 00:08:50.659 do_verify=1 00:08:50.659 verify=crc32c-intel 00:08:50.659 [job0] 00:08:50.659 filename=/dev/nvme0n1 00:08:50.659 [job1] 00:08:50.659 filename=/dev/nvme0n2 00:08:50.659 [job2] 00:08:50.659 filename=/dev/nvme0n3 00:08:50.659 [job3] 00:08:50.659 filename=/dev/nvme0n4 00:08:50.659 Could not set queue depth (nvme0n1) 00:08:50.659 Could not set queue depth (nvme0n2) 00:08:50.659 Could not set queue depth (nvme0n3) 00:08:50.659 Could not set queue depth (nvme0n4) 00:08:50.920 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.920 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.920 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.920 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:50.920 fio-3.35 00:08:50.920 Starting 4 threads 00:08:52.292 00:08:52.292 job0: (groupid=0, jobs=1): err= 0: pid=3787245: Tue Nov 26 21:05:39 2024 00:08:52.292 read: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec) 00:08:52.292 slat (nsec): min=1234, max=4886.7k, avg=75402.37, stdev=352349.60 00:08:52.292 clat (usec): min=3255, max=21556, avg=10027.23, stdev=3940.77 00:08:52.292 lat (usec): min=3258, max=22169, avg=10102.63, stdev=3958.85 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5800], 20.00th=[ 6390], 00:08:52.292 | 30.00th=[ 6849], 40.00th=[ 8029], 50.00th=[ 9372], 60.00th=[10421], 00:08:52.292 | 70.00th=[11863], 80.00th=[13304], 90.00th=[16450], 95.00th=[17433], 00:08:52.292 | 99.00th=[19792], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:08:52.292 | 99.99th=[21627] 00:08:52.292 write: IOPS=6550, BW=25.6MiB/s (26.8MB/s)(25.7MiB/1003msec); 0 zone resets 00:08:52.292 slat (nsec): min=1709, max=4712.4k, avg=78196.59, stdev=332811.64 00:08:52.292 clat (usec): min=1105, max=19809, avg=9905.57, stdev=4255.40 00:08:52.292 lat (usec): min=2190, max=19811, avg=9983.77, stdev=4276.30 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 4293], 5.00th=[ 5014], 10.00th=[ 5473], 20.00th=[ 5997], 00:08:52.292 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 8979], 60.00th=[10290], 00:08:52.292 | 70.00th=[11994], 80.00th=[14484], 90.00th=[16581], 95.00th=[17695], 00:08:52.292 | 99.00th=[19006], 99.50th=[19530], 99.90th=[19792], 99.95th=[19792], 00:08:52.292 | 99.99th=[19792] 00:08:52.292 bw ( KiB/s): min=21968, max=29516, per=24.88%, avg=25742.00, stdev=5337.24, samples=2 00:08:52.292 iops : min= 5492, max= 7379, avg=6435.50, stdev=1334.31, samples=2 00:08:52.292 lat (msec) : 2=0.01%, 4=0.45%, 10=56.01%, 20=43.13%, 50=0.41% 00:08:52.292 cpu : usr=3.29%, sys=4.89%, ctx=1756, majf=0, minf=1 00:08:52.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:52.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.292 issued rwts: total=6144,6570,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:08:52.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.292 job1: (groupid=0, jobs=1): err= 0: pid=3787246: Tue Nov 26 21:05:39 2024 00:08:52.292 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:08:52.292 slat (nsec): min=1248, max=4795.4k, avg=75753.15, stdev=387414.87 00:08:52.292 clat (usec): min=2534, max=20902, avg=9842.97, stdev=3897.35 00:08:52.292 lat (usec): min=3000, max=20908, avg=9918.72, stdev=3917.46 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 3916], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 6063], 00:08:52.292 | 30.00th=[ 6980], 40.00th=[ 8225], 50.00th=[ 9503], 60.00th=[10683], 00:08:52.292 | 70.00th=[11731], 80.00th=[13435], 90.00th=[15926], 95.00th=[17171], 00:08:52.292 | 99.00th=[17433], 99.50th=[19006], 99.90th=[20841], 99.95th=[20841], 00:08:52.292 | 99.99th=[20841] 00:08:52.292 write: IOPS=6890, BW=26.9MiB/s (28.2MB/s)(27.0MiB/1002msec); 0 zone resets 00:08:52.292 slat (nsec): min=1820, max=4513.2k, avg=68095.53, stdev=315654.40 00:08:52.292 clat (usec): min=974, max=19430, avg=8892.88, stdev=3418.91 00:08:52.292 lat (usec): min=1431, max=19433, avg=8960.97, stdev=3433.77 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 3064], 5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 5604], 00:08:52.292 | 30.00th=[ 6325], 40.00th=[ 7439], 50.00th=[ 8586], 60.00th=[ 9765], 00:08:52.292 | 70.00th=[10814], 80.00th=[12256], 90.00th=[13829], 95.00th=[15008], 00:08:52.292 | 99.00th=[16909], 99.50th=[17171], 99.90th=[18744], 99.95th=[18744], 00:08:52.292 | 99.99th=[19530] 00:08:52.292 bw ( KiB/s): min=26560, max=26560, per=25.68%, avg=26560.00, stdev= 0.00, samples=1 00:08:52.292 iops : min= 6640, max= 6640, avg=6640.00, stdev= 0.00, samples=1 00:08:52.292 lat (usec) : 1000=0.01% 00:08:52.292 lat (msec) : 2=0.17%, 4=2.99%, 10=55.31%, 20=41.45%, 50=0.07% 00:08:52.292 cpu : usr=4.00%, sys=4.70%, ctx=1590, majf=0, minf=2 00:08:52.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:52.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.292 issued rwts: total=6656,6904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.292 job2: (groupid=0, jobs=1): err= 0: pid=3787247: Tue Nov 26 21:05:39 2024 00:08:52.292 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:08:52.292 slat (nsec): min=1292, max=5275.0k, avg=71863.72, stdev=324752.96 00:08:52.292 clat (usec): min=3176, max=20170, avg=9355.67, stdev=3116.71 00:08:52.292 lat (usec): min=3187, max=20173, avg=9427.53, stdev=3136.63 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 4686], 5.00th=[ 5735], 10.00th=[ 6194], 20.00th=[ 6652], 00:08:52.292 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8225], 60.00th=[ 9241], 00:08:52.292 | 70.00th=[10814], 80.00th=[12518], 90.00th=[13829], 95.00th=[15008], 00:08:52.292 | 99.00th=[17695], 99.50th=[19268], 99.90th=[20055], 99.95th=[20055], 00:08:52.292 | 99.99th=[20055] 00:08:52.292 write: IOPS=6819, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1002msec); 0 zone resets 00:08:52.292 slat (nsec): min=1803, max=4847.1k, avg=72319.66, stdev=300791.90 00:08:52.292 clat (usec): min=623, max=20277, avg=9371.81, stdev=3503.11 00:08:52.292 lat (usec): min=1642, max=20280, avg=9444.13, stdev=3527.13 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 3851], 5.00th=[ 5538], 10.00th=[ 
5866], 20.00th=[ 6390], 00:08:52.292 | 30.00th=[ 7177], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8979], 00:08:52.292 | 70.00th=[10814], 80.00th=[12780], 90.00th=[15008], 95.00th=[16581], 00:08:52.292 | 99.00th=[18482], 99.50th=[18744], 99.90th=[19268], 99.95th=[19268], 00:08:52.292 | 99.99th=[20317] 00:08:52.292 bw ( KiB/s): min=24576, max=24576, per=23.76%, avg=24576.00, stdev= 0.00, samples=1 00:08:52.292 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:08:52.292 lat (usec) : 750=0.01% 00:08:52.292 lat (msec) : 2=0.12%, 4=0.52%, 10=65.52%, 20=33.72%, 50=0.12% 00:08:52.292 cpu : usr=3.80%, sys=5.59%, ctx=1606, majf=0, minf=1 00:08:52.292 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:52.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.292 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.292 issued rwts: total=6656,6833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.292 job3: (groupid=0, jobs=1): err= 0: pid=3787248: Tue Nov 26 21:05:39 2024 00:08:52.292 read: IOPS=5339, BW=20.9MiB/s (21.9MB/s)(20.9MiB/1003msec) 00:08:52.292 slat (nsec): min=1340, max=4501.0k, avg=88387.17, stdev=403519.10 00:08:52.292 clat (usec): min=1241, max=23069, avg=11505.57, stdev=4068.52 00:08:52.292 lat (usec): min=2544, max=23077, avg=11593.96, stdev=4087.88 00:08:52.292 clat percentiles (usec): 00:08:52.292 | 1.00th=[ 3654], 5.00th=[ 6063], 10.00th=[ 6783], 20.00th=[ 7635], 00:08:52.292 | 30.00th=[ 8291], 40.00th=[ 9634], 50.00th=[10945], 60.00th=[12387], 00:08:52.293 | 70.00th=[14222], 80.00th=[15664], 90.00th=[17433], 95.00th=[17957], 00:08:52.293 | 99.00th=[20579], 99.50th=[21365], 99.90th=[22938], 99.95th=[22938], 00:08:52.293 | 99.99th=[23200] 00:08:52.293 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:08:52.293 slat (nsec): min=1805, max=5038.7k, avg=89909.89, stdev=378925.90 00:08:52.293 clat (usec): min=2974, max=22995, avg=11571.19, stdev=4080.59 00:08:52.293 lat (usec): min=3039, max=22998, avg=11661.10, stdev=4099.88 00:08:52.293 clat percentiles (usec): 00:08:52.293 | 1.00th=[ 4686], 5.00th=[ 5669], 10.00th=[ 6587], 20.00th=[ 7570], 00:08:52.293 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10945], 60.00th=[12780], 00:08:52.293 | 70.00th=[14353], 80.00th=[16057], 90.00th=[17171], 95.00th=[17695], 00:08:52.293 | 99.00th=[19006], 99.50th=[19792], 99.90th=[22938], 99.95th=[22938], 00:08:52.293 | 99.99th=[22938] 00:08:52.293 bw ( KiB/s): min=19568, max=25488, per=21.78%, avg=22528.00, stdev=4186.07, samples=2 00:08:52.293 iops : min= 4892, max= 6372, avg=5632.00, stdev=1046.52, samples=2 00:08:52.293 lat (msec) : 2=0.01%, 4=0.67%, 10=43.17%, 20=55.34%, 50=0.81% 00:08:52.293 cpu : usr=2.40%, sys=4.89%, ctx=1621, majf=0, minf=1 00:08:52.293 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:52.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:52.293 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:52.293 issued rwts: total=5356,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:52.293 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:52.293 00:08:52.293 Run status group 0 (all jobs): 00:08:52.293 READ: bw=96.6MiB/s (101MB/s), 20.9MiB/s-25.9MiB/s (21.9MB/s-27.2MB/s), io=96.9MiB (102MB), run=1002-1003msec 00:08:52.293 WRITE: bw=101MiB/s (106MB/s), 21.9MiB/s-26.9MiB/s (23.0MB/s-28.2MB/s), 
io=101MiB (106MB), run=1002-1003msec 00:08:52.293 00:08:52.293 Disk stats (read/write): 00:08:52.293 nvme0n1: ios=5682/5967, merge=0/0, ticks=15614/15630, in_queue=31244, util=86.47% 00:08:52.293 nvme0n2: ios=5813/6144, merge=0/0, ticks=16595/16365, in_queue=32960, util=86.64% 00:08:52.293 nvme0n3: ios=5248/5632, merge=0/0, ticks=15575/15970, in_queue=31545, util=88.13% 00:08:52.293 nvme0n4: ios=4757/5120, merge=0/0, ticks=15040/16318, in_queue=31358, util=89.72% 00:08:52.293 21:05:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:52.293 21:05:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3787506 00:08:52.293 21:05:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:52.293 21:05:39 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:52.293 [global] 00:08:52.293 thread=1 00:08:52.293 invalidate=1 00:08:52.293 rw=read 00:08:52.293 time_based=1 00:08:52.293 runtime=10 00:08:52.293 ioengine=libaio 00:08:52.293 direct=1 00:08:52.293 bs=4096 00:08:52.293 iodepth=1 00:08:52.293 norandommap=1 00:08:52.293 numjobs=1 00:08:52.293 00:08:52.293 [job0] 00:08:52.293 filename=/dev/nvme0n1 00:08:52.293 [job1] 00:08:52.293 filename=/dev/nvme0n2 00:08:52.293 [job2] 00:08:52.293 filename=/dev/nvme0n3 00:08:52.293 [job3] 00:08:52.293 filename=/dev/nvme0n4 00:08:52.293 Could not set queue depth (nvme0n1) 00:08:52.293 Could not set queue depth (nvme0n2) 00:08:52.293 Could not set queue depth (nvme0n3) 00:08:52.293 Could not set queue depth (nvme0n4) 00:08:52.293 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.293 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.293 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.293 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:52.293 fio-3.35 00:08:52.293 Starting 4 threads 00:08:55.570 21:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:55.570 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=71958528, buflen=4096 00:08:55.570 fio: pid=3787674, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:55.570 21:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:55.570 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=83042304, buflen=4096 00:08:55.570 fio: pid=3787673, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:55.570 21:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:55.570 21:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:55.570 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=2625536, buflen=4096 00:08:55.570 fio: pid=3787671, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:55.570 21:05:42 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:55.570 21:05:42 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:55.829 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=27860992, buflen=4096 00:08:55.829 fio: pid=3787672, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:55.829 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:55.829 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:55.829 00:08:55.829 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3787671: Tue Nov 26 21:05:43 2024 00:08:55.829 read: IOPS=11.0k, BW=43.1MiB/s (45.2MB/s)(131MiB/3025msec) 00:08:55.829 slat (usec): min=5, max=18462, avg= 8.57, stdev=154.04 00:08:55.829 clat (usec): min=46, max=504, avg=80.72, stdev=16.93 00:08:55.829 lat (usec): min=52, max=18538, avg=89.29, stdev=155.00 00:08:55.829 clat percentiles (usec): 00:08:55.829 | 1.00th=[ 54], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 70], 00:08:55.829 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 77], 60.00th=[ 80], 00:08:55.829 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 106], 95.00th=[ 121], 00:08:55.829 | 99.00th=[ 128], 99.50th=[ 131], 99.90th=[ 165], 99.95th=[ 188], 00:08:55.829 | 99.99th=[ 367] 00:08:55.829 bw ( KiB/s): min=37304, max=50592, per=38.23%, avg=44612.80, stdev=4769.18, samples=5 00:08:55.829 iops : min= 9326, max=12648, avg=11153.20, stdev=1192.29, samples=5 00:08:55.829 lat (usec) : 50=0.28%, 100=88.90%, 250=10.78%, 500=0.03%, 750=0.01% 00:08:55.829 cpu : usr=2.38%, sys=9.39%, ctx=33415, majf=0, minf=1 00:08:55.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 issued rwts: total=33410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.829 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3787672: Tue Nov 26 21:05:43 2024 00:08:55.829 read: IOPS=7163, BW=28.0MiB/s (29.3MB/s)(90.6MiB/3237msec) 00:08:55.829 slat (usec): min=6, max=17773, avg=10.92, stdev=237.04 00:08:55.829 clat (usec): min=38, max=11450, avg=127.27, stdev=142.98 00:08:55.829 lat (usec): min=53, max=17857, avg=138.20, stdev=276.36 00:08:55.829 clat percentiles (usec): 00:08:55.829 | 1.00th=[ 52], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 76], 00:08:55.829 | 30.00th=[ 112], 40.00th=[ 125], 50.00th=[ 137], 60.00th=[ 149], 00:08:55.829 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 174], 00:08:55.829 | 99.00th=[ 210], 99.50th=[ 233], 99.90th=[ 359], 99.95th=[ 441], 00:08:55.829 | 99.99th=[ 8586] 00:08:55.829 bw ( KiB/s): min=24152, max=32925, per=23.20%, avg=27071.50, stdev=3486.70, samples=6 00:08:55.829 iops : min= 6038, max= 8231, avg=6767.83, stdev=871.59, samples=6 00:08:55.829 lat (usec) : 50=0.38%, 100=27.80%, 250=71.46%, 500=0.31%, 750=0.01% 00:08:55.829 lat (msec) : 10=0.02%, 20=0.01% 00:08:55.829 cpu : 
usr=1.85%, sys=6.03%, ctx=23194, majf=0, minf=2 00:08:55.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 issued rwts: total=23187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.829 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3787673: Tue Nov 26 21:05:43 2024 00:08:55.829 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(79.2MiB/2840msec) 00:08:55.829 slat (usec): min=2, max=15884, avg= 9.08, stdev=157.38 00:08:55.829 clat (usec): min=67, max=553, avg=129.22, stdev=31.57 00:08:55.829 lat (usec): min=71, max=15988, avg=138.30, stdev=160.35 00:08:55.829 clat percentiles (usec): 00:08:55.829 | 1.00th=[ 79], 5.00th=[ 84], 10.00th=[ 87], 20.00th=[ 92], 00:08:55.829 | 30.00th=[ 104], 40.00th=[ 120], 50.00th=[ 139], 60.00th=[ 147], 00:08:55.829 | 70.00th=[ 151], 80.00th=[ 155], 90.00th=[ 161], 95.00th=[ 172], 00:08:55.829 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 231], 99.95th=[ 260], 00:08:55.829 | 99.99th=[ 367] 00:08:55.829 bw ( KiB/s): min=24600, max=33576, per=23.59%, avg=27529.60, stdev=3626.30, samples=5 00:08:55.829 iops : min= 6150, max= 8394, avg=6882.40, stdev=906.57, samples=5 00:08:55.829 lat (usec) : 100=28.16%, 250=71.78%, 500=0.05%, 750=0.01% 00:08:55.829 cpu : usr=2.47%, sys=5.74%, ctx=20278, majf=0, minf=2 00:08:55.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 issued rwts: total=20275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.829 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3787674: Tue Nov 26 21:05:43 2024 00:08:55.829 read: IOPS=6617, BW=25.8MiB/s (27.1MB/s)(68.6MiB/2655msec) 00:08:55.829 slat (nsec): min=6039, max=44380, avg=7159.21, stdev=975.77 00:08:55.829 clat (usec): min=70, max=603, avg=142.35, stdev=23.26 00:08:55.829 lat (usec): min=76, max=610, avg=149.51, stdev=23.34 00:08:55.829 clat percentiles (usec): 00:08:55.829 | 1.00th=[ 86], 5.00th=[ 101], 10.00th=[ 116], 20.00th=[ 124], 00:08:55.829 | 30.00th=[ 130], 40.00th=[ 139], 50.00th=[ 147], 60.00th=[ 151], 00:08:55.829 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 176], 00:08:55.829 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 231], 99.95th=[ 249], 00:08:55.829 | 99.99th=[ 482] 00:08:55.829 bw ( KiB/s): min=24752, max=30096, per=22.60%, avg=26377.60, stdev=2155.38, samples=5 00:08:55.829 iops : min= 6188, max= 7524, avg=6594.40, stdev=538.85, samples=5 00:08:55.829 lat (usec) : 100=4.67%, 250=95.28%, 500=0.04%, 750=0.01% 00:08:55.829 cpu : usr=1.81%, sys=5.58%, ctx=17570, majf=0, minf=1 00:08:55.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.829 issued rwts: total=17569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.829 00:08:55.829 Run status group 0 
(all jobs): 00:08:55.829 READ: bw=114MiB/s (119MB/s), 25.8MiB/s-43.1MiB/s (27.1MB/s-45.2MB/s), io=369MiB (387MB), run=2655-3237msec 00:08:55.829 00:08:55.829 Disk stats (read/write): 00:08:55.829 nvme0n1: ios=31385/0, merge=0/0, ticks=2452/0, in_queue=2452, util=93.92% 00:08:55.829 nvme0n2: ios=20954/0, merge=0/0, ticks=2732/0, in_queue=2732, util=93.00% 00:08:55.829 nvme0n3: ios=20274/0, merge=0/0, ticks=2546/0, in_queue=2546, util=95.28% 00:08:55.829 nvme0n4: ios=17095/0, merge=0/0, ticks=2353/0, in_queue=2353, util=96.42% 00:08:56.088 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.088 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:56.347 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.347 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:56.347 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.347 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:56.605 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:56.605 21:05:43 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:56.863 21:05:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:56.863 21:05:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3787506 00:08:56.863 21:05:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:56.863 21:05:44 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:57.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 
00:08:57.796 nvmf hotplug test: fio failed as expected 00:08:57.796 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:58.055 rmmod nvme_rdma 00:08:58.055 rmmod nvme_fabrics 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3784449 ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3784449 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3784449 ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3784449 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3784449 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3784449' 00:08:58.055 killing process with pid 3784449 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3784449 00:08:58.055 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3784449 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:58.313 00:08:58.313 real 0m24.811s 00:08:58.313 user 2m2.388s 00:08:58.313 sys 0m8.697s 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:58.313 ************************************ 00:08:58.313 END TEST nvmf_fio_target 00:08:58.313 ************************************ 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.313 ************************************ 00:08:58.313 START TEST nvmf_bdevio 00:08:58.313 ************************************ 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:08:58.313 * Looking for test storage... 00:08:58.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.313 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.572 21:05:45 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.572 --rc genhtml_branch_coverage=1 00:08:58.572 --rc genhtml_function_coverage=1 00:08:58.572 --rc genhtml_legend=1 00:08:58.572 --rc geninfo_all_blocks=1 00:08:58.572 --rc geninfo_unexecuted_blocks=1 00:08:58.572 00:08:58.572 ' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.572 --rc genhtml_branch_coverage=1 00:08:58.572 --rc genhtml_function_coverage=1 00:08:58.572 --rc genhtml_legend=1 00:08:58.572 --rc geninfo_all_blocks=1 00:08:58.572 --rc geninfo_unexecuted_blocks=1 00:08:58.572 00:08:58.572 ' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.572 --rc genhtml_branch_coverage=1 00:08:58.572 --rc genhtml_function_coverage=1 00:08:58.572 --rc genhtml_legend=1 00:08:58.572 --rc geninfo_all_blocks=1 00:08:58.572 --rc geninfo_unexecuted_blocks=1 00:08:58.572 00:08:58.572 ' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.572 --rc genhtml_branch_coverage=1 00:08:58.572 --rc genhtml_function_coverage=1 00:08:58.572 --rc genhtml_legend=1 00:08:58.572 --rc geninfo_all_blocks=1 00:08:58.572 --rc geninfo_unexecuted_blocks=1 00:08:58.572 00:08:58.572 ' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.572 21:05:45 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:58.572 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.573 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.573 21:05:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:03.909 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:03.910 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:03.910 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:03.910 21:05:51 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:03.910 Found net devices under 0000:18:00.0: mlx_0_0 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:03.910 Found net devices under 0000:18:00.1: mlx_0_1 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:03.910 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:04.170 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.170 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:04.170 altname enp24s0f0np0 00:09:04.170 altname ens785f0np0 00:09:04.170 inet 192.168.100.8/24 scope global mlx_0_0 00:09:04.170 valid_lft forever preferred_lft forever 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:04.170 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.170 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:04.170 altname enp24s0f1np1 00:09:04.170 altname ens785f1np1 00:09:04.170 inet 192.168.100.9/24 scope global mlx_0_1 00:09:04.170 valid_lft forever preferred_lft forever 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
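The address lookup traced above for mlx_0_0 and mlx_0_1 (nvmf/common.sh@116-117) reduces to a three-stage pipeline: one-line address output, take the fourth field ("192.168.100.8/24"), strip the prefix length. A minimal standalone sketch using the interface names from this log:

    # Resolve an interface's IPv4 address the way the trace does.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
    get_ip_address mlx_0_1   # 192.168.100.9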
00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.170 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:04.171 192.168.100.9' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:04.171 192.168.100.9' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:04.171 192.168.100.9' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3792026 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3792026 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3792026 ']' 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.171 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.171 [2024-11-26 21:05:51.531464] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:09:04.171 [2024-11-26 21:05:51.531511] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.171 [2024-11-26 21:05:51.591103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.430 [2024-11-26 21:05:51.629035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.430 [2024-11-26 21:05:51.629069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.430 [2024-11-26 21:05:51.629075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:04.430 [2024-11-26 21:05:51.629080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:04.430 [2024-11-26 21:05:51.629085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
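The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from waitforlisten, which blocks until the freshly started nvmf_tgt (pid 3792026 here) is ready for RPCs. A hedged sketch of that pattern; the socket probe below is an illustrative stand-in, not autotest_common.sh's verbatim readiness check:

    waitforlisten() {
        # Block until $pid exposes its RPC UNIX socket, failing fast if the
        # process dies or the retry budget (100 in the trace) runs out.
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while ! [[ -S $rpc_addr ]]; do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            (( max_retries-- > 0 )) || return 1      # gave up waiting
            sleep 0.1
        done
    }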
00:09:04.430 [2024-11-26 21:05:51.630554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:04.430 [2024-11-26 21:05:51.630661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:04.430 [2024-11-26 21:05:51.630770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.430 [2024-11-26 21:05:51.630771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.430 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.430 [2024-11-26 21:05:51.795554] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ab9e30/0x1abe320) succeed. 00:09:04.430 [2024-11-26 21:05:51.803807] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1abb4c0/0x1aff9c0) succeed. 
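Around this point the harness provisions the bdevio target over rpc_cmd: the rdma transport just above, then (below) a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 192.168.100.8:4420. Collected as direct calls to SPDK's scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket, the sequence is:

    rpc=./scripts/rpc.py   # from the SPDK tree; add -s <sock> for a non-default socket
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420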
00:09:04.689 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.689 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:04.689 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.689 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.689 Malloc0 00:09:04.689 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:04.690 [2024-11-26 21:05:51.987396] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:04.690 21:05:51 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:04.690 { 00:09:04.690 "params": { 00:09:04.690 "name": "Nvme$subsystem", 00:09:04.690 "trtype": "$TEST_TRANSPORT", 00:09:04.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:04.690 "adrfam": "ipv4", 00:09:04.690 "trsvcid": "$NVMF_PORT", 00:09:04.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:04.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:04.690 "hdgst": ${hdgst:-false}, 00:09:04.690 "ddgst": ${ddgst:-false} 00:09:04.690 }, 00:09:04.690 "method": "bdev_nvme_attach_controller" 00:09:04.690 } 00:09:04.690 EOF 00:09:04.690 )") 00:09:04.690 21:05:51 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:04.690 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:04.690 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:04.690 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:04.690 "params": { 00:09:04.690 "name": "Nvme1", 00:09:04.690 "trtype": "rdma", 00:09:04.690 "traddr": "192.168.100.8", 00:09:04.690 "adrfam": "ipv4", 00:09:04.690 "trsvcid": "4420", 00:09:04.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:04.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:04.690 "hdgst": false, 00:09:04.690 "ddgst": false 00:09:04.690 }, 00:09:04.690 "method": "bdev_nvme_attach_controller" 00:09:04.690 }' 00:09:04.690 [2024-11-26 21:05:52.034284] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:09:04.690 [2024-11-26 21:05:52.034324] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3792286 ] 00:09:04.690 [2024-11-26 21:05:52.092561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.949 [2024-11-26 21:05:52.133421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.949 [2024-11-26 21:05:52.133514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.949 [2024-11-26 21:05:52.133516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.949 I/O targets: 00:09:04.949 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:04.949 00:09:04.949 00:09:04.949 CUnit - A unit testing framework for C - Version 2.1-3 00:09:04.949 http://cunit.sourceforge.net/ 00:09:04.949 00:09:04.949 00:09:04.949 Suite: bdevio tests on: Nvme1n1 00:09:04.949 Test: blockdev write read block ...passed 00:09:04.949 Test: blockdev write zeroes read block ...passed 00:09:04.949 Test: blockdev write zeroes read no split ...passed 00:09:04.949 Test: blockdev write zeroes read split ...passed 00:09:04.949 Test: blockdev write zeroes read split partial ...passed 00:09:04.949 Test: blockdev reset ...[2024-11-26 21:05:52.336358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:04.949 [2024-11-26 21:05:52.358249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:09:05.208 [2024-11-26 21:05:52.385926] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:05.208 passed 00:09:05.208 Test: blockdev write read 8 blocks ...passed 00:09:05.208 Test: blockdev write read size > 128k ...passed 00:09:05.208 Test: blockdev write read invalid size ...passed 00:09:05.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.208 Test: blockdev write read max offset ...passed 00:09:05.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.208 Test: blockdev writev readv 8 blocks ...passed 00:09:05.208 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.208 Test: blockdev writev readv block ...passed 00:09:05.208 Test: blockdev writev readv size > 128k ...passed 00:09:05.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.208 Test: blockdev comparev and writev ...[2024-11-26 21:05:52.388628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.388655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.388664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.388671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.388817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.388826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.388833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.388839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.388982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.388991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.388998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.389004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.389147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.389156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.389163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:05.208 [2024-11-26 21:05:52.389169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:05.208 passed 00:09:05.208 Test: blockdev nvme passthru rw ...passed 00:09:05.208 Test: blockdev nvme passthru vendor specific ...[2024-11-26 21:05:52.389399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:05.208 [2024-11-26 21:05:52.389409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.389454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:05.208 [2024-11-26 21:05:52.389462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.389507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:05.208 [2024-11-26 21:05:52.389514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:05.208 [2024-11-26 21:05:52.389550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:09:05.208 [2024-11-26 21:05:52.389558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:05.208 passed 00:09:05.208 Test: blockdev nvme admin passthru ...passed 00:09:05.208 Test: blockdev copy ...passed 00:09:05.208 00:09:05.208 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.208 suites 1 1 n/a 0 0 00:09:05.208 tests 23 23 23 0 0 00:09:05.208 asserts 152 152 152 0 n/a 00:09:05.208 00:09:05.208 Elapsed time = 0.171 seconds 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:05.208 rmmod nvme_rdma 00:09:05.208 rmmod nvme_fabrics 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.208 21:05:52 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3792026 ']' 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3792026 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3792026 ']' 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3792026 00:09:05.208 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:05.209 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.209 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3792026 00:09:05.467 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3792026' 00:09:05.468 killing process with pid 3792026 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3792026 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3792026 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:05.468 00:09:05.468 real 0m7.236s 00:09:05.468 user 0m7.554s 00:09:05.468 sys 0m4.825s 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.468 21:05:52 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:05.468 ************************************ 00:09:05.468 END TEST nvmf_bdevio 00:09:05.468 ************************************ 00:09:05.726 21:05:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:05.727 00:09:05.727 real 3m45.986s 00:09:05.727 user 10m25.585s 00:09:05.727 sys 1m17.511s 00:09:05.727 21:05:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:05.727 21:05:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:05.727 ************************************ 00:09:05.727 END TEST nvmf_target_core 00:09:05.727 ************************************ 00:09:05.727 21:05:52 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:05.727 21:05:52 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.727 21:05:52 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.727 21:05:52 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:05.727 ************************************ 00:09:05.727 START TEST nvmf_target_extra 00:09:05.727 ************************************ 00:09:05.727 21:05:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:09:05.727 * Looking for test storage... 00:09:05.727 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.727 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.986 --rc genhtml_branch_coverage=1 00:09:05.986 --rc genhtml_function_coverage=1 00:09:05.986 --rc genhtml_legend=1 00:09:05.986 --rc geninfo_all_blocks=1 00:09:05.986 --rc geninfo_unexecuted_blocks=1 00:09:05.986 00:09:05.986 ' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.986 --rc genhtml_branch_coverage=1 00:09:05.986 --rc genhtml_function_coverage=1 00:09:05.986 --rc genhtml_legend=1 00:09:05.986 --rc geninfo_all_blocks=1 00:09:05.986 --rc geninfo_unexecuted_blocks=1 00:09:05.986 00:09:05.986 ' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.986 --rc genhtml_branch_coverage=1 00:09:05.986 --rc genhtml_function_coverage=1 00:09:05.986 --rc genhtml_legend=1 00:09:05.986 --rc geninfo_all_blocks=1 00:09:05.986 --rc geninfo_unexecuted_blocks=1 00:09:05.986 00:09:05.986 ' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.986 --rc genhtml_branch_coverage=1 00:09:05.986 --rc genhtml_function_coverage=1 00:09:05.986 --rc genhtml_legend=1 00:09:05.986 --rc geninfo_all_blocks=1 00:09:05.986 --rc geninfo_unexecuted_blocks=1 00:09:05.986 00:09:05.986 ' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:05.986 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:05.986 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:05.987 ************************************ 00:09:05.987 START TEST nvmf_example 00:09:05.987 ************************************ 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:09:05.987 * Looking for test storage... 
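The scripts/common.sh sourcing traced above steps through a dotted-version comparison (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) to pick lcov options, and nvmf_example repeats the same walk below. A hedged reconstruction that follows the array-and-loop shape visible in the trace rather than the script's verbatim body; numeric version fields are assumed:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        # Split both versions on ".-:" and compare field by field,
        # treating a missing field as 0.
        local -a ver1 ver2
        local v op=$2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
            if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
        done
        [[ $op == '==' ]]   # every field matched
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch this run takes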
00:09:05.987 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.987 --rc genhtml_branch_coverage=1 00:09:05.987 --rc genhtml_function_coverage=1 00:09:05.987 --rc genhtml_legend=1 00:09:05.987 --rc geninfo_all_blocks=1 00:09:05.987 --rc geninfo_unexecuted_blocks=1 00:09:05.987 00:09:05.987 ' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.987 --rc genhtml_branch_coverage=1 00:09:05.987 --rc genhtml_function_coverage=1 00:09:05.987 --rc genhtml_legend=1 00:09:05.987 --rc geninfo_all_blocks=1 00:09:05.987 --rc geninfo_unexecuted_blocks=1 00:09:05.987 00:09:05.987 ' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.987 --rc genhtml_branch_coverage=1 00:09:05.987 --rc genhtml_function_coverage=1 00:09:05.987 --rc genhtml_legend=1 00:09:05.987 --rc geninfo_all_blocks=1 00:09:05.987 --rc geninfo_unexecuted_blocks=1 00:09:05.987 00:09:05.987 ' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:05.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.987 --rc genhtml_branch_coverage=1 00:09:05.987 --rc genhtml_function_coverage=1 00:09:05.987 --rc genhtml_legend=1 00:09:05.987 --rc geninfo_all_blocks=1 00:09:05.987 --rc geninfo_unexecuted_blocks=1 00:09:05.987 00:09:05.987 ' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:05.987 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.246 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
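One real wart is visible just above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' because an optional flag it consults is unset, and test requires an integer on both sides of -eq, hence the "[: : integer expression expected" complaint. The expression merely evaluates false and the run continues, but the noise disappears if the flag is defaulted before the numeric check. A sketch of the quieter pattern (the flag name is illustrative, not the variable common.sh actually reads):

    # Defaulting an optional numeric flag keeps '[' from seeing an empty string.
    SOME_OPTIONAL_FLAG=${SOME_OPTIONAL_FLAG:-0}   # illustrative name
    if [ "$SOME_OPTIONAL_FLAG" -eq 1 ]; then
        NVMF_APP+=("--extra-arg")   # placeholder for whatever the flag would gate
    fi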
00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.246 21:05:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
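The array declarations starting here (and continuing below) belong to gather_supported_nvmf_pci_devs: e810, x722 and mlx collect PCI addresses per NIC family out of pci_bus_cache, an associative array keyed by vendor:device ID. A rough sketch of that cache pattern, assuming plain lspci -Dn output; the real cache is populated by dedicated helpers elsewhere in common.sh:

    # pci_bus_cache: "0xVENDOR:0xDEVICE" -> space-separated PCI addresses (BDFs).
    declare -A pci_bus_cache
    while read -r bdf _class ids _; do
        # ids looks like 15b3:1015; rekey it as 0x15b3:0x1015.
        pci_bus_cache["0x${ids%:*}:0x${ids#*:}"]+="$bdf "
    done < <(lspci -Dn)

    mellanox=0x15b3
    mlx=(${pci_bus_cache["$mellanox:0x1015"]})   # unquoted on purpose: split the BDF list
    echo "0x1015 ports: ${mlx[*]:-none}"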
00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:11.515 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
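The escaped patterns in the trace (\0\x\1\0\1\7 and friends) are just xtrace quoting literal, non-glob [[ == ]] comparisons: each discovered port's driver and device ID are checked, certain device IDs get extra handling, and on rdma runs the connect command is overridden to 'nvme connect -i 15'. A condensed, illustrative version of that dispatch; the sysfs paths are standard, but TEST_TRANSPORT stands in for whatever variable common.sh consults:

    for pci in "${pci_devs[@]}"; do
        driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
        device=$(< "/sys/bus/pci/devices/$pci/device")   # e.g. 0x1015, as in this run
        [[ $driver == unknown || $driver == unbound ]] && continue
        case $device in
            0x1017|0x1019) : ;;   # extra per-part handling elided
        esac
        [[ $TEST_TRANSPORT == rdma ]] && NVME_CONNECT='nvme connect -i 15'
    done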
00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:11.515 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:11.515 Found net devices under 0000:18:00.0: mlx_0_0 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:11.515 Found net devices under 0000:18:00.1: mlx_0_1 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.515 21:05:58 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:11.515 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:11.516 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.516 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:11.516 altname enp24s0f0np0 00:09:11.516 altname ens785f0np0 00:09:11.516 inet 192.168.100.8/24 scope global mlx_0_0 00:09:11.516 valid_lft forever preferred_lft forever 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:11.516 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:11.516 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:11.516 altname enp24s0f1np1 00:09:11.516 altname ens785f1np1 00:09:11.516 inet 192.168.100.9/24 scope global mlx_0_1 00:09:11.516 valid_lft forever preferred_lft forever 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # 
get_available_rdma_ips 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:11.516 21:05:58 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:11.516 192.168.100.9' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:11.516 192.168.100.9' 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:09:11.516 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:11.517 192.168.100.9' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3795667 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3795667 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3795667 ']' 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
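Above, the two discovered addresses come back as a single newline-separated string, and the first and second target IPs are peeled off with head/tail before the transport options are finalized and nvme-rdma is loaded. A sketch of that peel-off, using the values from this run:

    # RDMA_IP_LIST arrives as one string with one address per line.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

    [ -z "$NVMF_FIRST_TARGET_IP" ] && { echo 'no RDMA-capable IPs found' >&2; exit 1; }
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma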
00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:11.517 21:05:58 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.450 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.708 21:05:59 
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:12.708 21:05:59 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:24.913 Initializing NVMe Controllers 00:09:24.913 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:09:24.913 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:24.913 Initialization complete. Launching workers. 00:09:24.913 ======================================================== 00:09:24.913 Latency(us) 00:09:24.913 Device Information : IOPS MiB/s Average min max 00:09:24.913 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26006.18 101.59 2460.83 601.59 12013.98 00:09:24.913 ======================================================== 00:09:24.913 Total : 26006.18 101.59 2460.83 601.59 12013.98 00:09:24.913 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:24.913 rmmod nvme_rdma 00:09:24.913 rmmod nvme_fabrics 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3795667 ']' 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3795667 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3795667 ']' 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3795667 00:09:24.913 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3795667 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795667' 00:09:24.914 killing process with pid 3795667 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3795667 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3795667 00:09:24.914 nvmf threads initialize successfully 00:09:24.914 bdev subsystem init successfully 00:09:24.914 created a nvmf target service 00:09:24.914 create targets's poll groups done 00:09:24.914 all subsystems of target started 00:09:24.914 nvmf target is running 00:09:24.914 all subsystems of target stopped 00:09:24.914 destroy targets's poll groups done 00:09:24.914 destroyed the nvmf target service 00:09:24.914 bdev subsystem finish successfully 00:09:24.914 nvmf threads destroy successfully 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.914 00:09:24.914 real 0m18.326s 00:09:24.914 user 0m51.675s 00:09:24.914 sys 0m4.513s 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.914 ************************************ 00:09:24.914 END TEST nvmf_example 00:09:24.914 ************************************ 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:24.914 ************************************ 00:09:24.914 START TEST nvmf_filesystem 00:09:24.914 ************************************ 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:24.914 * Looking for test storage... 
00:09:24.914 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.914 --rc genhtml_branch_coverage=1 00:09:24.914 --rc genhtml_function_coverage=1 00:09:24.914 --rc genhtml_legend=1 00:09:24.914 --rc geninfo_all_blocks=1 00:09:24.914 --rc geninfo_unexecuted_blocks=1 00:09:24.914 00:09:24.914 ' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.914 --rc genhtml_branch_coverage=1 00:09:24.914 --rc genhtml_function_coverage=1 00:09:24.914 --rc genhtml_legend=1 00:09:24.914 --rc geninfo_all_blocks=1 00:09:24.914 --rc geninfo_unexecuted_blocks=1 00:09:24.914 00:09:24.914 ' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.914 --rc genhtml_branch_coverage=1 00:09:24.914 --rc genhtml_function_coverage=1 00:09:24.914 --rc genhtml_legend=1 00:09:24.914 --rc geninfo_all_blocks=1 00:09:24.914 --rc geninfo_unexecuted_blocks=1 00:09:24.914 00:09:24.914 ' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.914 --rc genhtml_branch_coverage=1 00:09:24.914 --rc genhtml_function_coverage=1 00:09:24.914 --rc genhtml_legend=1 00:09:24.914 --rc geninfo_all_blocks=1 00:09:24.914 --rc geninfo_unexecuted_blocks=1 00:09:24.914 00:09:24.914 ' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:24.914 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:24.914 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:24.915 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:24.915 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:24.915 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:24.915 #define SPDK_CONFIG_H 00:09:24.915 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:24.915 #define SPDK_CONFIG_APPS 1 00:09:24.915 #define SPDK_CONFIG_ARCH native 00:09:24.915 #undef SPDK_CONFIG_ASAN 00:09:24.915 #undef SPDK_CONFIG_AVAHI 00:09:24.915 #undef SPDK_CONFIG_CET 00:09:24.915 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:24.915 #define SPDK_CONFIG_COVERAGE 1 00:09:24.915 #define SPDK_CONFIG_CROSS_PREFIX 00:09:24.915 #undef SPDK_CONFIG_CRYPTO 00:09:24.915 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:24.915 #undef SPDK_CONFIG_CUSTOMOCF 00:09:24.915 #undef SPDK_CONFIG_DAOS 00:09:24.915 #define SPDK_CONFIG_DAOS_DIR 00:09:24.915 #define SPDK_CONFIG_DEBUG 1 00:09:24.915 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:24.915 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:24.915 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:24.915 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:24.915 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:24.915 #undef SPDK_CONFIG_DPDK_UADK 00:09:24.915 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:24.915 #define SPDK_CONFIG_EXAMPLES 1 00:09:24.915 #undef SPDK_CONFIG_FC 00:09:24.915 #define SPDK_CONFIG_FC_PATH 00:09:24.915 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:24.915 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:24.915 #define SPDK_CONFIG_FSDEV 1 00:09:24.916 #undef SPDK_CONFIG_FUSE 00:09:24.916 #undef SPDK_CONFIG_FUZZER 00:09:24.916 #define SPDK_CONFIG_FUZZER_LIB 00:09:24.916 #undef SPDK_CONFIG_GOLANG 00:09:24.916 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:24.916 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:24.916 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:24.916 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:24.916 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:24.916 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:24.916 #undef SPDK_CONFIG_HAVE_LZ4 00:09:24.916 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:24.916 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:24.916 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:24.916 #define SPDK_CONFIG_IDXD 1 00:09:24.916 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:24.916 #undef SPDK_CONFIG_IPSEC_MB 00:09:24.916 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:24.916 #define SPDK_CONFIG_ISAL 1 00:09:24.916 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:24.916 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:24.916 #define SPDK_CONFIG_LIBDIR 00:09:24.916 #undef SPDK_CONFIG_LTO 00:09:24.916 #define SPDK_CONFIG_MAX_LCORES 128 00:09:24.916 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:24.916 #define SPDK_CONFIG_NVME_CUSE 1 00:09:24.916 #undef SPDK_CONFIG_OCF 00:09:24.916 #define SPDK_CONFIG_OCF_PATH 00:09:24.916 #define SPDK_CONFIG_OPENSSL_PATH 00:09:24.916 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:24.916 #define SPDK_CONFIG_PGO_DIR 00:09:24.916 #undef SPDK_CONFIG_PGO_USE 00:09:24.916 #define SPDK_CONFIG_PREFIX /usr/local 00:09:24.916 #undef SPDK_CONFIG_RAID5F 00:09:24.916 #undef SPDK_CONFIG_RBD 00:09:24.916 #define SPDK_CONFIG_RDMA 1 00:09:24.916 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:24.916 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:24.916 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:24.916 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:24.916 #define SPDK_CONFIG_SHARED 1 00:09:24.916 #undef SPDK_CONFIG_SMA 00:09:24.916 
#define SPDK_CONFIG_TESTS 1 00:09:24.916 #undef SPDK_CONFIG_TSAN 00:09:24.916 #define SPDK_CONFIG_UBLK 1 00:09:24.916 #define SPDK_CONFIG_UBSAN 1 00:09:24.916 #undef SPDK_CONFIG_UNIT_TESTS 00:09:24.916 #undef SPDK_CONFIG_URING 00:09:24.916 #define SPDK_CONFIG_URING_PATH 00:09:24.916 #undef SPDK_CONFIG_URING_ZNS 00:09:24.916 #undef SPDK_CONFIG_USDT 00:09:24.916 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:24.916 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:24.916 #undef SPDK_CONFIG_VFIO_USER 00:09:24.916 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:24.916 #define SPDK_CONFIG_VHOST 1 00:09:24.916 #define SPDK_CONFIG_VIRTIO 1 00:09:24.916 #undef SPDK_CONFIG_VTUNE 00:09:24.916 #define SPDK_CONFIG_VTUNE_DIR 00:09:24.916 #define SPDK_CONFIG_WERROR 1 00:09:24.916 #define SPDK_CONFIG_WPDK_DIR 00:09:24.916 #undef SPDK_CONFIG_XNVME 00:09:24.916 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:24.916 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:24.916 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:24.917 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3798695 ]] 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3798695 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:24.918 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0kCQzu 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0kCQzu/tests/target /tmp/spdk.0kCQzu 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67569995776 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=78631616512 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11061620736 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39301013504 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315808256 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=15703183360 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=15726325760 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23142400 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39314522112 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39315808256 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1286144 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:24.919 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=7863148544 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=7863160832 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:24.919 * Looking for test storage... 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=67569995776 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13276213248 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.919 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:24.919 21:06:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.919 21:06:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.919 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:24.920 21:06:12 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.920 --rc genhtml_branch_coverage=1 00:09:24.920 --rc genhtml_function_coverage=1 00:09:24.920 --rc genhtml_legend=1 00:09:24.920 --rc geninfo_all_blocks=1 00:09:24.920 --rc geninfo_unexecuted_blocks=1 00:09:24.920 00:09:24.920 ' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.920 --rc genhtml_branch_coverage=1 00:09:24.920 --rc genhtml_function_coverage=1 00:09:24.920 --rc genhtml_legend=1 00:09:24.920 --rc geninfo_all_blocks=1 00:09:24.920 --rc geninfo_unexecuted_blocks=1 00:09:24.920 00:09:24.920 ' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.920 --rc genhtml_branch_coverage=1 00:09:24.920 --rc genhtml_function_coverage=1 00:09:24.920 --rc genhtml_legend=1 00:09:24.920 --rc geninfo_all_blocks=1 00:09:24.920 --rc geninfo_unexecuted_blocks=1 00:09:24.920 00:09:24.920 ' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.920 --rc genhtml_branch_coverage=1 00:09:24.920 --rc genhtml_function_coverage=1 00:09:24.920 --rc genhtml_legend=1 00:09:24.920 --rc geninfo_all_blocks=1 00:09:24.920 --rc geninfo_unexecuted_blocks=1 00:09:24.920 00:09:24.920 ' 00:09:24.920 
21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:24.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.920 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.921 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.921 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:24.921 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:24.921 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:24.921 21:06:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:30.191 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:30.191 
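
The one real error in this stretch scrolled past a few entries back: test/nvmf/common.sh line 33 ran '[' '' -eq 1 ']', so test printed "integer expression expected" because the value being checked was empty rather than a number. The branch is skipped either way, but defaulting the expansion keeps the comparison numeric; a sketch (VAR stands in for whichever flag line 33 reads, which this log does not name):

  # Before (breaks when VAR is unset):  [ "$VAR" -eq 1 ]
  # After (always compares an integer): [ "${VAR:-0}" -eq 1 ]
  if [ "${VAR:-0}" -eq 1 ]; then
    echo "flag enabled"
  fi
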
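
The e810/x722/mlx arrays above are filled from a pci_bus_cache map keyed by vendor:device ID, and pci_devs is then narrowed to the Mellanox list because SPDK_TEST_NVMF_NICS=mlx5. A rough sketch of building such a cache with lspci (an assumption about the mechanism, not the tree's actual implementation):

  # Map "0xVENDOR:0xDEVICE" -> space-separated PCI addresses
  declare -A pci_bus_cache
  while read -r bdf _ id _; do
    pci_bus_cache["0x${id%:*}:0x${id#*:}"]+="$bdf "
  done < <(lspci -Dn)
  # word-split on purpose: the two ConnectX-4 Lx ports found above
  mlx=(${pci_bus_cache["0x15b3:0x1015"]})
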
21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:30.191 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.191 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:30.192 Found net devices under 0000:18:00.0: mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:30.192 Found net devices under 0000:18:00.1: mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.192 21:06:17 
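
rdma_device_init above is essentially a fixed modprobe sequence followed by NIC IP assignment; condensed, with the module list copied from the trace:

  # load_ib_rdma_modules, as traced: the whole IB/RDMA core stack
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
  done
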
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:30.192 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.192 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:30.192 altname enp24s0f0np0 00:09:30.192 altname ens785f0np0 00:09:30.192 inet 192.168.100.8/24 scope global mlx_0_0 00:09:30.192 valid_lft forever preferred_lft forever 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:30.192 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:30.192 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:30.192 altname enp24s0f1np1 00:09:30.192 altname ens785f1np1 00:09:30.192 inet 192.168.100.9/24 scope global mlx_0_1 00:09:30.192 valid_lft forever preferred_lft forever 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:30.192 21:06:17 
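
get_ip_address is the three-stage pipeline visible above; on this rig it yields 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1:

  # First IPv4 address of an interface, prefix length stripped
  get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8
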
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:30.192 192.168.100.9' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:30.192 192.168.100.9' 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:09:30.192 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:30.193 192.168.100.9' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.193 ************************************ 00:09:30.193 START TEST nvmf_filesystem_no_in_capsule 00:09:30.193 ************************************ 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.193 21:06:17 
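
RDMA_IP_LIST is a newline-separated string, so the first and second target IPs fall out with head/tail exactly as traced above:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
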
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3801996 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3801996 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3801996 ']' 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.193 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.451 [2024-11-26 21:06:17.635726] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:09:30.451 [2024-11-26 21:06:17.635769] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.451 [2024-11-26 21:06:17.694946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.451 [2024-11-26 21:06:17.735112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.451 [2024-11-26 21:06:17.735148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:30.451 [2024-11-26 21:06:17.735155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.451 [2024-11-26 21:06:17.735161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.451 [2024-11-26 21:06:17.735165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
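
nvmfappstart boils down to launching the nvmf_tgt binary traced above and waiting for its RPC socket to answer. A minimal sketch, assuming rpc.py polling stands in for waitforlisten (the real helper is more careful):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # crude waitforlisten: poll until the RPC server responds
  until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || break   # bail out if the target died
    sleep 0.5
  done
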
00:09:30.451 [2024-11-26 21:06:17.736541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.451 [2024-11-26 21:06:17.736635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.451 [2024-11-26 21:06:17.736714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.451 [2024-11-26 21:06:17.736716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.451 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.451 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:30.451 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:30.451 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.452 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.452 [2024-11-26 21:06:17.871911] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:30.710 [2024-11-26 21:06:17.890508] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb63530/0xb67a20) succeed. 00:09:30.710 [2024-11-26 21:06:17.898685] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb64bc0/0xba90c0) succeed. 
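
With both mlx5 IB devices registered, the test builds the target over RPC; the transport call is traced above and the subsystem plumbing follows below. The same five calls via scripts/rpc.py (rpc_cmd is a thin wrapper around it):

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 512 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
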
00:09:30.710 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.710 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:30.710 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.710 21:06:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.710 Malloc1 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.710 [2024-11-26 21:06:18.133000] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:30.710 21:06:18 
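
get_bdev_size, starting above and finishing just below, derives the size from the bdev's JSON: block_size times num_blocks, reported in MiB (512 here). Condensed with the jq filters the trace shows; the back-conversion to 536870912 bytes is inferred from the malloc_size value that follows:

  info=$(./scripts/rpc.py bdev_get_bdevs -b Malloc1)
  bs=$(jq '.[] .block_size' <<< "$info")   # 512
  nb=$(jq '.[] .num_blocks' <<< "$info")   # 1048576
  echo $(( bs * nb / 1024 / 1024 ))        # 512 MiB == 536870912 bytes
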
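
The host side then connects through the kernel initiator and polls lsblk until a namespace with the target's serial appears; condensed from the connect and waitforserial entries below (the retry bound and sleep are the traced ones):

  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  for (( i = 0; i <= 15; i++ )); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    sleep 2
  done
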
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.710 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:30.968 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.968 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:30.968 { 00:09:30.968 "name": "Malloc1", 00:09:30.968 "aliases": [ 00:09:30.968 "f76f353c-9438-435d-9777-422e366312cb" 00:09:30.968 ], 00:09:30.968 "product_name": "Malloc disk", 00:09:30.968 "block_size": 512, 00:09:30.968 "num_blocks": 1048576, 00:09:30.968 "uuid": "f76f353c-9438-435d-9777-422e366312cb", 00:09:30.968 "assigned_rate_limits": { 00:09:30.969 "rw_ios_per_sec": 0, 00:09:30.969 "rw_mbytes_per_sec": 0, 00:09:30.969 "r_mbytes_per_sec": 0, 00:09:30.969 "w_mbytes_per_sec": 0 00:09:30.969 }, 00:09:30.969 "claimed": true, 00:09:30.969 "claim_type": "exclusive_write", 00:09:30.969 "zoned": false, 00:09:30.969 "supported_io_types": { 00:09:30.969 "read": true, 00:09:30.969 "write": true, 00:09:30.969 "unmap": true, 00:09:30.969 "flush": true, 00:09:30.969 "reset": true, 00:09:30.969 "nvme_admin": false, 00:09:30.969 "nvme_io": false, 00:09:30.969 "nvme_io_md": false, 00:09:30.969 "write_zeroes": true, 00:09:30.969 "zcopy": true, 00:09:30.969 "get_zone_info": false, 00:09:30.969 "zone_management": false, 00:09:30.969 "zone_append": false, 00:09:30.969 "compare": false, 00:09:30.969 "compare_and_write": false, 00:09:30.969 "abort": true, 00:09:30.969 "seek_hole": false, 00:09:30.969 "seek_data": false, 00:09:30.969 "copy": true, 00:09:30.969 "nvme_iov_md": false 00:09:30.969 }, 00:09:30.969 "memory_domains": [ 00:09:30.969 { 00:09:30.969 "dma_device_id": "system", 00:09:30.969 "dma_device_type": 1 00:09:30.969 }, 00:09:30.969 { 00:09:30.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.969 "dma_device_type": 2 00:09:30.969 } 00:09:30.969 ], 00:09:30.969 "driver_specific": {} 00:09:30.969 } 00:09:30.969 ]' 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:09:30.969 21:06:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:31.901 21:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.901 21:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:31.901 21:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.901 21:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:31.901 21:06:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:33.800 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:33.800 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:33.801 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:34.059 21:06:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.991 ************************************ 00:09:34.991 START TEST filesystem_ext4 00:09:34.991 ************************************ 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:34.991 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:35.249 mke2fs 1.47.0 (5-Feb-2023) 00:09:35.249 Discarding device blocks: 0/522240 done 00:09:35.249 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:35.249 Filesystem UUID: 1d676e28-39f5-49bf-b6fd-26c7a9a169b9 00:09:35.249 Superblock backups stored on 
blocks: 00:09:35.249 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:35.249 00:09:35.249 Allocating group tables: 0/64 done 00:09:35.249 Writing inode tables: 0/64 done 00:09:35.249 Creating journal (8192 blocks): done 00:09:35.249 Writing superblocks and filesystem accounting information: 0/64 done 00:09:35.249 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3801996 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:35.249 00:09:35.249 real 0m0.178s 00:09:35.249 user 0m0.028s 00:09:35.249 sys 0m0.057s 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:35.249 ************************************ 00:09:35.249 END TEST filesystem_ext4 00:09:35.249 ************************************ 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:09:35.249 ************************************ 00:09:35.249 START TEST filesystem_btrfs 00:09:35.249 ************************************ 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:35.249 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:35.250 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:35.250 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:35.250 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:35.507 btrfs-progs v6.8.1 00:09:35.507 See https://btrfs.readthedocs.io for more information. 00:09:35.507 00:09:35.507 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:35.507 NOTE: several default settings have changed in version 5.15, please make sure 00:09:35.507 this does not affect your deployments: 00:09:35.507 - DUP for metadata (-m dup) 00:09:35.507 - enabled no-holes (-O no-holes) 00:09:35.507 - enabled free-space-tree (-R free-space-tree) 00:09:35.507 00:09:35.507 Label: (null) 00:09:35.507 UUID: 9bd46a02-9f34-47ac-be84-2645f2e8bcf4 00:09:35.507 Node size: 16384 00:09:35.507 Sector size: 4096 (CPU page size: 4096) 00:09:35.507 Filesystem size: 510.00MiB 00:09:35.507 Block group profiles: 00:09:35.507 Data: single 8.00MiB 00:09:35.507 Metadata: DUP 32.00MiB 00:09:35.507 System: DUP 8.00MiB 00:09:35.507 SSD detected: yes 00:09:35.507 Zoned device: no 00:09:35.507 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:35.507 Checksum: crc32c 00:09:35.507 Number of devices: 1 00:09:35.507 Devices: 00:09:35.507 ID SIZE PATH 00:09:35.507 1 510.00MiB /dev/nvme0n1p1 00:09:35.507 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3801996 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:35.507 00:09:35.507 real 0m0.239s 00:09:35.507 user 0m0.026s 00:09:35.507 sys 0m0.114s 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.507 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:35.507 ************************************ 00:09:35.507 END TEST filesystem_btrfs 
00:09:35.507 ************************************ 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:35.765 ************************************ 00:09:35.765 START TEST filesystem_xfs 00:09:35.765 ************************************ 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:35.765 21:06:22 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:35.765 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:35.765 = sectsz=512 attr=2, projid32bit=1 00:09:35.765 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:35.765 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:35.765 data = bsize=4096 blocks=130560, imaxpct=25 00:09:35.765 = sunit=0 swidth=0 blks 00:09:35.765 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:35.765 log =internal log bsize=4096 blocks=16384, version=2 00:09:35.765 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:35.765 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:35.765 Discarding blocks...Done. 
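The banner above closes the btrfs case, and the mkfs.xfs output for the next case has just finished. Two annotations, both taken from the traced lines themselves rather than from the test source:

First, the btrfs smoke test flattens to this shell sequence (filesystem.sh@23-43) — passing means the filesystem survives a write/delete cycle and the target process plus both block devices are still present afterwards:

mount /dev/nvme0n1p1 /mnt/device         # @23: mount the freshly made fs
touch /mnt/device/aaa && sync            # @24-25: exercise a write
rm /mnt/device/aaa && sync               # @26-27: exercise a delete
umount /mnt/device                       # @30
kill -0 3801996                          # @37: nvmf_tgt must still be running
lsblk -l -o NAME | grep -q -w nvme0n1    # @40: NVMe namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1  # @43: partition still visible

Second, the mkfs.xfs geometry just printed is internally consistent with the 510.00MiB partition reported by mkfs.btrfs earlier; the check is plain arithmetic:

echo $(( 4 * 32640 ))                # agcount * agsize = 130560 data blocks
echo $(( 130560 * 4096 ))            # blocks * bsize   = 534773760 bytes
echo $(( 534773760 / 1024 / 1024 ))  # = 510 MiB, the whole /dev/nvme0n1p1 partition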
00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3801996 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:35.765 00:09:35.765 real 0m0.187s 00:09:35.765 user 0m0.022s 00:09:35.765 sys 0m0.069s 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.765 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:35.765 ************************************ 00:09:35.765 END TEST filesystem_xfs 00:09:35.765 ************************************ 00:09:36.024 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:36.024 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:36.024 21:06:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:36.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:36.956 21:06:24 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3801996 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3801996 ']' 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3801996 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:36.956 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3801996 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3801996' 00:09:36.957 killing process with pid 3801996 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3801996 00:09:36.957 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3801996 00:09:37.216 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:37.216 00:09:37.216 real 0m7.031s 00:09:37.216 user 0m27.441s 00:09:37.216 sys 0m1.020s 00:09:37.216 21:06:24 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.216 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.216 ************************************ 00:09:37.216 END TEST nvmf_filesystem_no_in_capsule 00:09:37.216 ************************************ 00:09:37.216 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:37.216 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 ************************************ 00:09:37.475 START TEST nvmf_filesystem_in_capsule 00:09:37.475 ************************************ 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3803465 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3803465 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3803465 ']' 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
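nvmfappstart above launches a fresh target for the in-capsule variant and waitforlisten blocks until its RPC socket answers. A sketch of that start-and-wait pattern using the binary path and flags visible in the trace — the shape of the polling loop is an assumption, not shown in this log:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!    # 3803465 in this run
# poll until the app answers on /var/tmp/spdk.sock (assumed loop shape)
for ((i = 0; i < 100; i++)); do
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done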
00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.475 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.475 [2024-11-26 21:06:24.734595] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:09:37.475 [2024-11-26 21:06:24.734636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.475 [2024-11-26 21:06:24.793752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.475 [2024-11-26 21:06:24.831086] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.475 [2024-11-26 21:06:24.831117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.475 [2024-11-26 21:06:24.831125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.475 [2024-11-26 21:06:24.831131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.475 [2024-11-26 21:06:24.831136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.475 [2024-11-26 21:06:24.832331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.475 [2024-11-26 21:06:24.832371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.475 [2024-11-26 21:06:24.832372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.475 [2024-11-26 21:06:24.832351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.734 21:06:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.734 [2024-11-26 21:06:24.990610] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x176b530/0x176fa20) 
succeed. 00:09:37.734 [2024-11-26 21:06:24.998740] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x176cbc0/0x17b10c0) succeed. 00:09:37.734 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.734 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:37.734 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.734 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 Malloc1 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 [2024-11-26 21:06:25.264269] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 
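The rpc_cmd traces at filesystem.sh@52-56 above perform the whole in-capsule target setup. rpc_cmd forwards to scripts/rpc.py, so the same sequence written as direct calls reads roughly as below; the -c 4096 on the transport is what makes this the in-capsule run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096   # 4096-byte in-capsule data size
$rpc bdev_malloc_create 512 512 -b Malloc1                                     # 512 MiB ram bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420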
00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:37.992 { 00:09:37.992 "name": "Malloc1", 00:09:37.992 "aliases": [ 00:09:37.992 "d4d1a629-70bb-4330-a49e-685b0d3f85c8" 00:09:37.992 ], 00:09:37.992 "product_name": "Malloc disk", 00:09:37.992 "block_size": 512, 00:09:37.992 "num_blocks": 1048576, 00:09:37.992 "uuid": "d4d1a629-70bb-4330-a49e-685b0d3f85c8", 00:09:37.992 "assigned_rate_limits": { 00:09:37.992 "rw_ios_per_sec": 0, 00:09:37.992 "rw_mbytes_per_sec": 0, 00:09:37.992 "r_mbytes_per_sec": 0, 00:09:37.992 "w_mbytes_per_sec": 0 00:09:37.992 }, 00:09:37.992 "claimed": true, 00:09:37.992 "claim_type": "exclusive_write", 00:09:37.992 "zoned": false, 00:09:37.992 "supported_io_types": { 00:09:37.992 "read": true, 00:09:37.992 "write": true, 00:09:37.992 "unmap": true, 00:09:37.992 "flush": true, 00:09:37.992 "reset": true, 00:09:37.992 "nvme_admin": false, 00:09:37.992 "nvme_io": false, 00:09:37.992 "nvme_io_md": false, 00:09:37.992 "write_zeroes": true, 00:09:37.992 "zcopy": true, 00:09:37.992 "get_zone_info": false, 00:09:37.992 "zone_management": false, 00:09:37.992 "zone_append": false, 00:09:37.992 "compare": false, 00:09:37.992 "compare_and_write": false, 00:09:37.992 "abort": true, 00:09:37.992 "seek_hole": false, 00:09:37.992 "seek_data": false, 00:09:37.992 "copy": true, 00:09:37.992 "nvme_iov_md": false 00:09:37.992 }, 00:09:37.992 "memory_domains": [ 00:09:37.992 { 00:09:37.992 "dma_device_id": "system", 00:09:37.992 "dma_device_type": 1 00:09:37.992 }, 00:09:37.992 { 00:09:37.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.992 "dma_device_type": 2 00:09:37.992 } 00:09:37.992 ], 00:09:37.992 "driver_specific": {} 00:09:37.992 } 00:09:37.992 ]' 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:37.992 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:37.993 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
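get_bdev_size above derives the malloc size by parsing the bdev_get_bdevs JSON with jq. Condensed into a sketch, with the values from this run in comments (the here-string plumbing is mine; the jq filters and arithmetic are as traced):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdev_info=$($rpc bdev_get_bdevs -b Malloc1)
bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 512 (@1387)
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1048576 (@1388)
echo $(( bs * nb / 1024 / 1024 ))             # 512 MiB (@1391-1392)
# filesystem.sh then keeps the byte value: malloc_size = 512 * 1024 * 1024 = 536870912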
00:09:37.993 21:06:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:38.926 21:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:38.926 21:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:38.926 21:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:38.926 21:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:38.926 21:06:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:41.453 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:41.453 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:41.454 21:06:28 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:41.454 21:06:28 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.386 ************************************ 00:09:42.386 START TEST filesystem_in_capsule_ext4 00:09:42.386 ************************************ 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:42.386 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:42.387 mke2fs 1.47.0 (5-Feb-2023) 00:09:42.387 Discarding device blocks: 0/522240 done 00:09:42.387 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:42.387 Filesystem UUID: b6bf17b1-607c-4b06-b9f8-34977c700600 00:09:42.387 
Superblock backups stored on blocks: 00:09:42.387 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:42.387 00:09:42.387 Allocating group tables: 0/64 done 00:09:42.387 Writing inode tables: 0/64 done 00:09:42.387 Creating journal (8192 blocks): done 00:09:42.387 Writing superblocks and filesystem accounting information: 0/64 done 00:09:42.387 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3803465 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:42.387 00:09:42.387 real 0m0.176s 00:09:42.387 user 0m0.022s 00:09:42.387 sys 0m0.063s 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 ************************************ 00:09:42.387 END TEST filesystem_in_capsule_ext4 00:09:42.387 ************************************ 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.387 21:06:29 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.387 ************************************ 00:09:42.387 START TEST filesystem_in_capsule_btrfs 00:09:42.387 ************************************ 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:42.387 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:42.645 btrfs-progs v6.8.1 00:09:42.645 See https://btrfs.readthedocs.io for more information. 00:09:42.645 00:09:42.645 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:42.645 NOTE: several default settings have changed in version 5.15, please make sure 00:09:42.645 this does not affect your deployments: 00:09:42.645 - DUP for metadata (-m dup) 00:09:42.645 - enabled no-holes (-O no-holes) 00:09:42.645 - enabled free-space-tree (-R free-space-tree) 00:09:42.645 00:09:42.645 Label: (null) 00:09:42.645 UUID: 27648ae5-0e87-4be3-941f-60d56450d65d 00:09:42.645 Node size: 16384 00:09:42.645 Sector size: 4096 (CPU page size: 4096) 00:09:42.645 Filesystem size: 510.00MiB 00:09:42.645 Block group profiles: 00:09:42.645 Data: single 8.00MiB 00:09:42.645 Metadata: DUP 32.00MiB 00:09:42.645 System: DUP 8.00MiB 00:09:42.645 SSD detected: yes 00:09:42.645 Zoned device: no 00:09:42.645 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:42.645 Checksum: crc32c 00:09:42.645 Number of devices: 1 00:09:42.645 Devices: 00:09:42.645 ID SIZE PATH 00:09:42.645 1 510.00MiB /dev/nvme0n1p1 00:09:42.645 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:42.645 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:42.646 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:42.646 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3803465 00:09:42.646 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:42.646 21:06:29 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:42.646 00:09:42.646 real 0m0.228s 00:09:42.646 user 0m0.033s 00:09:42.646 sys 0m0.103s 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.646 ************************************ 00:09:42.646 END TEST filesystem_in_capsule_btrfs 00:09:42.646 ************************************ 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.646 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:42.905 ************************************ 00:09:42.905 START TEST filesystem_in_capsule_xfs 00:09:42.905 ************************************ 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:42.905 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:42.905 = sectsz=512 attr=2, projid32bit=1 00:09:42.905 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:42.905 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:42.905 data = bsize=4096 blocks=130560, imaxpct=25 00:09:42.905 = sunit=0 swidth=0 blks 00:09:42.905 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:42.905 log =internal log bsize=4096 blocks=16384, version=2 00:09:42.905 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:42.905 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:42.905 Discarding blocks...Done. 
00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3803465 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:42.905 00:09:42.905 real 0m0.187s 00:09:42.905 user 0m0.028s 00:09:42.905 sys 0m0.063s 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:42.905 ************************************ 00:09:42.905 END TEST filesystem_in_capsule_xfs 00:09:42.905 ************************************ 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:42.905 21:06:30 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:44.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:44.277 21:06:31 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3803465 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3803465 ']' 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3803465 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3803465 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3803465' 00:09:44.277 killing process with pid 3803465 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3803465 00:09:44.277 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3803465 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:44.537 00:09:44.537 real 0m7.099s 
00:09:44.537 user 0m27.670s 00:09:44.537 sys 0m1.002s 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:44.537 ************************************ 00:09:44.537 END TEST nvmf_filesystem_in_capsule 00:09:44.537 ************************************ 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:44.537 rmmod nvme_rdma 00:09:44.537 rmmod nvme_fabrics 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:44.537 00:09:44.537 real 0m20.237s 00:09:44.537 user 0m56.923s 00:09:44.537 sys 0m6.437s 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:44.537 ************************************ 00:09:44.537 END TEST nvmf_filesystem 00:09:44.537 ************************************ 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.537 ************************************ 00:09:44.537 START TEST nvmf_target_discovery 00:09:44.537 ************************************ 00:09:44.537 21:06:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:44.796 * Looking for test storage... 
00:09:44.796 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.796 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:44.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.797 --rc genhtml_branch_coverage=1 00:09:44.797 --rc genhtml_function_coverage=1 00:09:44.797 --rc genhtml_legend=1 00:09:44.797 --rc geninfo_all_blocks=1 00:09:44.797 --rc geninfo_unexecuted_blocks=1 00:09:44.797 00:09:44.797 ' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:44.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.797 --rc genhtml_branch_coverage=1 00:09:44.797 --rc genhtml_function_coverage=1 00:09:44.797 --rc genhtml_legend=1 00:09:44.797 --rc geninfo_all_blocks=1 00:09:44.797 --rc geninfo_unexecuted_blocks=1 00:09:44.797 00:09:44.797 ' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:44.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.797 --rc genhtml_branch_coverage=1 00:09:44.797 --rc genhtml_function_coverage=1 00:09:44.797 --rc genhtml_legend=1 00:09:44.797 --rc geninfo_all_blocks=1 00:09:44.797 --rc geninfo_unexecuted_blocks=1 00:09:44.797 00:09:44.797 ' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:44.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.797 --rc genhtml_branch_coverage=1 00:09:44.797 --rc genhtml_function_coverage=1 00:09:44.797 --rc genhtml_legend=1 00:09:44.797 --rc geninfo_all_blocks=1 00:09:44.797 --rc geninfo_unexecuted_blocks=1 00:09:44.797 00:09:44.797 ' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.797 21:06:32 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.797 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.797 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:44.798 21:06:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:51.363 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:51.364 21:06:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:51.364 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:51.364 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:51.364 Found net devices under 0000:18:00.0: mlx_0_0 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.364 21:06:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:51.364 Found net devices under 0000:18:00.1: mlx_0_1 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
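Before any addresses are assigned, load_ib_rdma_modules (traced above) brings up the IB/RDMA core stack; condensed, the same modprobe sequence is:

    # same module order as the calls traced above: connection managers,
    # core, userspace MAD/verbs, iWARP CM, then the RDMA CM and its ucm
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done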
00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:51.364 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:51.365 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.365 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:51.365 altname enp24s0f0np0 00:09:51.365 altname ens785f0np0 00:09:51.365 inet 192.168.100.8/24 scope global mlx_0_0 00:09:51.365 valid_lft forever preferred_lft forever 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.365 21:06:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:51.365 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:51.365 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:51.365 altname enp24s0f1np1 00:09:51.365 altname ens785f1np1 00:09:51.365 inet 192.168.100.9/24 scope global mlx_0_1 00:09:51.365 valid_lft forever preferred_lft forever 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
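Each RDMA-backed interface then has its first IPv4 address read back; a condensed equivalent of the get_ip_address calls traced above (interface names are the ones found on this rig):

    for ifc in mlx_0_0 mlx_0_1; do
        # "ip -o -4" puts the CIDR address in field 4; strip the prefix length
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    done
    # Prints 192.168.100.8 then 192.168.100.9 here; the first becomes
    # NVMF_FIRST_TARGET_IP and the second NVMF_SECOND_TARGET_IP.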
00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:51.365 192.168.100.9' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:51.365 192.168.100.9' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:51.365 192.168.100.9' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:51.365 21:06:37 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3808276 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3808276 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3808276 ']' 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.365 21:06:37 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.365 [2024-11-26 21:06:37.979154] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:09:51.365 [2024-11-26 21:06:37.979212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.365 [2024-11-26 21:06:38.040129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.365 [2024-11-26 21:06:38.081667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.365 [2024-11-26 21:06:38.081704] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.365 [2024-11-26 21:06:38.081711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.365 [2024-11-26 21:06:38.081717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.365 [2024-11-26 21:06:38.081723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
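nvmfappstart launches the target and waitforlisten blocks until the RPC socket answers; a rough sketch of that handshake (the polling probe and interval are assumptions, not the exact autotest helper):

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    pid=$!
    # poll the default RPC socket until the app is ready (assumed probe;
    # the real waitforlisten also enforces a timeout)
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        sleep 0.5
    done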
00:09:51.365 [2024-11-26 21:06:38.083085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.365 [2024-11-26 21:06:38.083178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.365 [2024-11-26 21:06:38.083249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.365 [2024-11-26 21:06:38.083250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.365 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 [2024-11-26 21:06:38.244585] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x175e530/0x1762a20) succeed. 00:09:51.366 [2024-11-26 21:06:38.252783] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x175fbc0/0x17a40c0) succeed. 
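The transport creation above is what brings the two IB devices up; the seq 1 4 loop traced next then builds four null-bdev subsystems with listeners, plus the discovery listener and a 4430 referral. Condensed into direct rpc.py calls with the same arguments as the surrounding rpc_cmd trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # as created above
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430   # NVMF_PORT_REFERRAL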
00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 Null1 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 [2024-11-26 21:06:38.417794] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 Null2 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:51.366 21:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 Null3 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 Null4 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.366 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:09:51.366 00:09:51.366 Discovery Log Number of Records 6, Generation counter 6 00:09:51.366 =====Discovery Log Entry 0====== 00:09:51.366 trtype: rdma 00:09:51.366 adrfam: ipv4 00:09:51.366 subtype: current discovery subsystem 00:09:51.366 treq: not required 00:09:51.366 portid: 0 00:09:51.366 trsvcid: 4420 00:09:51.366 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:51.366 traddr: 192.168.100.8 00:09:51.366 eflags: explicit discovery connections, duplicate discovery information 00:09:51.366 rdma_prtype: not specified 00:09:51.366 rdma_qptype: connected 00:09:51.366 rdma_cms: rdma-cm 00:09:51.366 rdma_pkey: 0x0000 00:09:51.366 =====Discovery Log Entry 1====== 00:09:51.366 trtype: rdma 00:09:51.366 adrfam: ipv4 00:09:51.366 subtype: nvme subsystem 00:09:51.366 treq: not required 00:09:51.367 portid: 0 00:09:51.367 trsvcid: 4420 00:09:51.367 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:51.367 traddr: 192.168.100.8 00:09:51.367 eflags: none 00:09:51.367 rdma_prtype: not specified 00:09:51.367 rdma_qptype: connected 00:09:51.367 rdma_cms: rdma-cm 00:09:51.367 rdma_pkey: 0x0000 00:09:51.367 =====Discovery Log Entry 2====== 00:09:51.367 trtype: rdma 00:09:51.367 adrfam: ipv4 00:09:51.367 subtype: nvme subsystem 00:09:51.367 treq: not required 00:09:51.367 portid: 0 00:09:51.367 trsvcid: 4420 00:09:51.367 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:51.367 traddr: 192.168.100.8 00:09:51.367 eflags: none 00:09:51.367 rdma_prtype: not specified 00:09:51.367 rdma_qptype: connected 00:09:51.367 rdma_cms: rdma-cm 00:09:51.367 rdma_pkey: 0x0000 00:09:51.367 =====Discovery Log Entry 3====== 00:09:51.367 trtype: rdma 00:09:51.367 adrfam: ipv4 00:09:51.367 subtype: nvme subsystem 00:09:51.367 treq: not required 00:09:51.367 portid: 0 00:09:51.367 trsvcid: 4420 00:09:51.367 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:51.367 traddr: 192.168.100.8 00:09:51.367 eflags: none 00:09:51.367 rdma_prtype: not specified 00:09:51.367 rdma_qptype: connected 00:09:51.367 rdma_cms: rdma-cm 00:09:51.367 rdma_pkey: 0x0000 00:09:51.367 =====Discovery Log Entry 4====== 00:09:51.367 trtype: rdma 00:09:51.367 adrfam: ipv4 00:09:51.367 subtype: nvme subsystem 00:09:51.367 treq: not required 00:09:51.367 portid: 0 00:09:51.367 trsvcid: 4420 00:09:51.367 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:51.367 traddr: 192.168.100.8 00:09:51.367 eflags: none 00:09:51.367 rdma_prtype: not specified 00:09:51.367 rdma_qptype: connected 00:09:51.367 rdma_cms: rdma-cm 00:09:51.367 rdma_pkey: 0x0000 00:09:51.367 =====Discovery Log Entry 5====== 00:09:51.367 trtype: rdma 00:09:51.367 adrfam: ipv4 00:09:51.367 subtype: discovery subsystem referral 00:09:51.367 treq: not required 00:09:51.367 portid: 0 00:09:51.367 trsvcid: 4430 00:09:51.367 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:51.367 traddr: 192.168.100.8 00:09:51.367 eflags: none 00:09:51.367 rdma_prtype: unrecognized 00:09:51.367 rdma_qptype: unrecognized 00:09:51.367 rdma_cms: unrecognized 00:09:51.367 rdma_pkey: 0x0000 00:09:51.367 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:51.367 Perform nvmf subsystem discovery via RPC 00:09:51.367 21:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:51.367 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.367 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.367 [ 00:09:51.367 { 00:09:51.367 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:51.367 "subtype": "Discovery", 00:09:51.367 "listen_addresses": [ 00:09:51.367 { 00:09:51.367 "trtype": "RDMA", 00:09:51.367 "adrfam": "IPv4", 00:09:51.367 "traddr": "192.168.100.8", 00:09:51.367 "trsvcid": "4420" 00:09:51.367 } 00:09:51.367 ], 00:09:51.367 "allow_any_host": true, 00:09:51.367 "hosts": [] 00:09:51.367 }, 00:09:51.367 { 00:09:51.367 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.367 "subtype": "NVMe", 00:09:51.367 "listen_addresses": [ 00:09:51.367 { 00:09:51.367 "trtype": "RDMA", 00:09:51.367 "adrfam": "IPv4", 00:09:51.367 "traddr": "192.168.100.8", 00:09:51.367 "trsvcid": "4420" 00:09:51.367 } 00:09:51.367 ], 00:09:51.367 "allow_any_host": true, 00:09:51.367 "hosts": [], 00:09:51.367 "serial_number": "SPDK00000000000001", 00:09:51.367 "model_number": "SPDK bdev Controller", 00:09:51.367 "max_namespaces": 32, 00:09:51.367 "min_cntlid": 1, 00:09:51.367 "max_cntlid": 65519, 00:09:51.367 "namespaces": [ 00:09:51.367 { 00:09:51.367 "nsid": 1, 00:09:51.367 "bdev_name": "Null1", 00:09:51.367 "name": "Null1", 00:09:51.367 "nguid": "36D4D0EF37454CDF9B94EA45C560EC80", 00:09:51.367 "uuid": "36d4d0ef-3745-4cdf-9b94-ea45c560ec80" 00:09:51.367 } 00:09:51.367 ] 00:09:51.367 }, 00:09:51.367 { 00:09:51.367 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:51.367 "subtype": "NVMe", 00:09:51.367 "listen_addresses": [ 00:09:51.367 { 00:09:51.367 "trtype": "RDMA", 00:09:51.367 "adrfam": "IPv4", 00:09:51.367 "traddr": "192.168.100.8", 00:09:51.367 "trsvcid": "4420" 00:09:51.367 } 00:09:51.367 ], 00:09:51.367 "allow_any_host": true, 00:09:51.367 "hosts": [], 00:09:51.367 "serial_number": "SPDK00000000000002", 00:09:51.367 "model_number": "SPDK bdev Controller", 00:09:51.367 "max_namespaces": 32, 00:09:51.367 "min_cntlid": 1, 00:09:51.367 "max_cntlid": 65519, 00:09:51.367 "namespaces": [ 00:09:51.367 { 00:09:51.367 "nsid": 1, 00:09:51.367 "bdev_name": "Null2", 00:09:51.367 "name": "Null2", 00:09:51.367 "nguid": "562AABBF84544999A4D23CBFDFFD730E", 00:09:51.367 "uuid": "562aabbf-8454-4999-a4d2-3cbfdffd730e" 00:09:51.367 } 00:09:51.367 ] 00:09:51.367 }, 00:09:51.367 { 00:09:51.367 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:51.367 "subtype": "NVMe", 00:09:51.367 "listen_addresses": [ 00:09:51.367 { 00:09:51.367 "trtype": "RDMA", 00:09:51.367 "adrfam": "IPv4", 00:09:51.367 "traddr": "192.168.100.8", 00:09:51.367 "trsvcid": "4420" 00:09:51.367 } 00:09:51.367 ], 00:09:51.367 "allow_any_host": true, 00:09:51.367 "hosts": [], 00:09:51.367 "serial_number": "SPDK00000000000003", 00:09:51.367 "model_number": "SPDK bdev Controller", 00:09:51.367 "max_namespaces": 32, 00:09:51.367 "min_cntlid": 1, 00:09:51.367 "max_cntlid": 65519, 00:09:51.367 "namespaces": [ 00:09:51.367 { 00:09:51.367 "nsid": 1, 00:09:51.367 "bdev_name": "Null3", 00:09:51.367 "name": "Null3", 00:09:51.367 "nguid": "0645402DAD3C4D0999B4747D0966C8CF", 00:09:51.367 "uuid": "0645402d-ad3c-4d09-99b4-747d0966c8cf" 00:09:51.367 } 00:09:51.367 ] 00:09:51.367 }, 00:09:51.367 { 00:09:51.367 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:51.367 "subtype": "NVMe", 00:09:51.367 "listen_addresses": [ 00:09:51.367 { 00:09:51.367 
"trtype": "RDMA", 00:09:51.367 "adrfam": "IPv4", 00:09:51.367 "traddr": "192.168.100.8", 00:09:51.367 "trsvcid": "4420" 00:09:51.368 } 00:09:51.368 ], 00:09:51.368 "allow_any_host": true, 00:09:51.368 "hosts": [], 00:09:51.368 "serial_number": "SPDK00000000000004", 00:09:51.368 "model_number": "SPDK bdev Controller", 00:09:51.368 "max_namespaces": 32, 00:09:51.368 "min_cntlid": 1, 00:09:51.368 "max_cntlid": 65519, 00:09:51.368 "namespaces": [ 00:09:51.368 { 00:09:51.368 "nsid": 1, 00:09:51.368 "bdev_name": "Null4", 00:09:51.368 "name": "Null4", 00:09:51.368 "nguid": "4DDF5D9FD3DB4332A1D4FC76DE99740E", 00:09:51.368 "uuid": "4ddf5d9f-d3db-4332-a1d4-fc76de99740e" 00:09:51.368 } 00:09:51.368 ] 00:09:51.368 } 00:09:51.368 ] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:51.368 
21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.368 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:51.627 21:06:38 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:51.627 rmmod nvme_rdma 00:09:51.627 rmmod nvme_fabrics 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3808276 ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3808276 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3808276 ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3808276 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3808276 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3808276' 00:09:51.627 killing process with pid 3808276 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3808276 00:09:51.627 21:06:38 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3808276 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:51.886 00:09:51.886 real 0m7.188s 00:09:51.886 user 0m5.904s 00:09:51.886 sys 0m4.743s 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:51.886 ************************************ 00:09:51.886 END TEST nvmf_target_discovery 
00:09:51.886 ************************************ 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:51.886 ************************************ 00:09:51.886 START TEST nvmf_referrals 00:09:51.886 ************************************ 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:51.886 * Looking for test storage... 00:09:51.886 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.886 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:52.145 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:52.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.146 --rc genhtml_branch_coverage=1 00:09:52.146 --rc genhtml_function_coverage=1 00:09:52.146 --rc genhtml_legend=1 00:09:52.146 --rc geninfo_all_blocks=1 00:09:52.146 --rc geninfo_unexecuted_blocks=1 00:09:52.146 00:09:52.146 ' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:52.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.146 --rc genhtml_branch_coverage=1 00:09:52.146 --rc genhtml_function_coverage=1 00:09:52.146 --rc genhtml_legend=1 00:09:52.146 --rc geninfo_all_blocks=1 00:09:52.146 --rc geninfo_unexecuted_blocks=1 00:09:52.146 00:09:52.146 ' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:52.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.146 --rc genhtml_branch_coverage=1 00:09:52.146 --rc genhtml_function_coverage=1 00:09:52.146 --rc genhtml_legend=1 00:09:52.146 --rc geninfo_all_blocks=1 00:09:52.146 --rc geninfo_unexecuted_blocks=1 00:09:52.146 00:09:52.146 ' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:52.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.146 --rc genhtml_branch_coverage=1 00:09:52.146 --rc genhtml_function_coverage=1 00:09:52.146 --rc genhtml_legend=1 00:09:52.146 --rc geninfo_all_blocks=1 00:09:52.146 --rc geninfo_unexecuted_blocks=1 00:09:52.146 00:09:52.146 ' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:52.146 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.147 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.147 21:06:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:09:57.415 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:09:57.415 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:57.415 Found net devices under 0000:18:00.0: mlx_0_0 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:57.415 Found net devices under 0000:18:00.1: mlx_0_1 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:57.415 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.416 21:06:44 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:57.416 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.416 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:09:57.416 altname enp24s0f0np0 00:09:57.416 altname ens785f0np0 00:09:57.416 inet 192.168.100.8/24 scope global mlx_0_0 00:09:57.416 valid_lft forever preferred_lft forever 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:57.416 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:57.416 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:09:57.416 altname enp24s0f1np1 00:09:57.416 altname ens785f1np1 00:09:57.416 inet 192.168.100.9/24 scope global mlx_0_1 00:09:57.416 valid_lft forever preferred_lft forever 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:57.416 21:06:44 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:57.416 192.168.100.9' 
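For reference, the address gathering traced above reduces to the following standalone sketch (the get_ipv4 helper name is illustrative; the interface names are the ones enumerated in this run):

# Collect the IPv4 address of each RDMA-capable netdev, mirroring the
# get_ip_address pipeline traced above.
get_ipv4() {
    local ifc=$1
    # "ip -o -4" prints one line per address; field 4 is ADDR/PREFIX
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
}
for ifc in mlx_0_0 mlx_0_1; do
    get_ipv4 "$ifc"     # -> 192.168.100.8, then 192.168.100.9
done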
00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:57.416 192.168.100.9' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:57.416 192.168.100.9' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3811767 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3811767 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3811767 ']' 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.416 [2024-11-26 21:06:44.585462] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
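The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt in the background and polling for its default RPC socket; a minimal sketch, assuming the binary path of this workspace (the retry count is illustrative):

# Start the SPDK NVMe-oF target with the same core mask and trace flags.
SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
"$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Wait for the UNIX-domain RPC socket to appear before issuing RPCs.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
kill -0 "$nvmfpid"      # sanity check: target must still be running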
00:09:57.416 [2024-11-26 21:06:44.585506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.416 [2024-11-26 21:06:44.642213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:57.416 [2024-11-26 21:06:44.679321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:57.416 [2024-11-26 21:06:44.679359] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:57.416 [2024-11-26 21:06:44.679368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:57.416 [2024-11-26 21:06:44.679373] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:57.416 [2024-11-26 21:06:44.679377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:57.416 [2024-11-26 21:06:44.680777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:57.416 [2024-11-26 21:06:44.680871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:57.416 [2024-11-26 21:06:44.680948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:57.416 [2024-11-26 21:06:44.680950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:57.416 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:57.417 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.417 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:57.417 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:57.417 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.417 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.417 [2024-11-26 21:06:44.842509] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16de530/0x16e2a20) succeed. 00:09:57.676 [2024-11-26 21:06:44.850703] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16dfbc0/0x17240c0) succeed. 
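With the RDMA transport created, the listener and referral setup that follows is plain rpc.py traffic; equivalent standalone commands (rpc_cmd in the trace is the harness wrapper around this script, which defaults to /var/tmp/spdk.sock):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Same transport options as the run above.
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Expose the discovery service on the first RDMA IP, port 8009.
$RPC nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
# Register three referrals on port 4430, then count them.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $RPC nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length   # expect 3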
00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 [2024-11-26 21:06:44.984334] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:44 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:57.676 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:57.934 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.192 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:58.451 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.709 21:06:45 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:58.709 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
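For reference, the subtype checks above reduce to two jq filters over the discovery log. A minimal sketch, assuming the same target address and jq filters as the trace (the hostnqn/hostid flags from the trace are omitted for brevity, and the helper name discover_json is illustrative):

    # Fetch the discovery log as JSON from the target's discovery service.
    discover_json() {
        nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json
    }

    # A referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces as an
    # "nvme subsystem" record carrying that subnqn ...
    discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'

    # ... while one registered with -n discovery surfaces as a
    # "discovery subsystem referral" with the well-known discovery NQN.
    discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'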
00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:58.967 rmmod nvme_rdma 00:09:58.967 rmmod nvme_fabrics 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3811767 ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3811767 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3811767 ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3811767 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3811767 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3811767' 00:09:58.967 killing process with pid 3811767 00:09:58.967 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3811767 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3811767 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:59.225 00:09:59.225 real 0m7.426s 00:09:59.225 user 0m9.827s 00:09:59.225 sys 0m4.623s 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.225 21:06:46 
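Summarizing the referral flow this test exercises: register referrals over RPC, confirm the same set from both the RPC side and a host-side discovery, then remove them. A minimal sketch, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and SPDK's scripts/rpc.py client (rpc_cmd in the trace is a wrapper around it); all flags are taken from the trace:

    rpc=scripts/rpc.py

    # Expose the discovery service, then register three referrals.
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done

    # Target-side view of the referral set.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: every discovery-log record except the current
    # discovery subsystem itself.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort

    # Teardown mirrors setup.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t rdma -a "$ip" -s 4430
    done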
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:59.225 ************************************ 00:09:59.225 END TEST nvmf_referrals 00:09:59.225 ************************************ 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.225 21:06:46 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:59.485 ************************************ 00:09:59.485 START TEST nvmf_connect_disconnect 00:09:59.485 ************************************ 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:59.485 * Looking for test storage... 00:09:59.485 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.485 --rc genhtml_branch_coverage=1 00:09:59.485 --rc genhtml_function_coverage=1 00:09:59.485 --rc genhtml_legend=1 00:09:59.485 --rc geninfo_all_blocks=1 00:09:59.485 --rc geninfo_unexecuted_blocks=1 00:09:59.485 00:09:59.485 ' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.485 --rc genhtml_branch_coverage=1 00:09:59.485 --rc genhtml_function_coverage=1 00:09:59.485 --rc genhtml_legend=1 00:09:59.485 --rc geninfo_all_blocks=1 00:09:59.485 --rc geninfo_unexecuted_blocks=1 00:09:59.485 00:09:59.485 ' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.485 --rc genhtml_branch_coverage=1 00:09:59.485 --rc genhtml_function_coverage=1 00:09:59.485 --rc genhtml_legend=1 00:09:59.485 --rc geninfo_all_blocks=1 00:09:59.485 --rc geninfo_unexecuted_blocks=1 00:09:59.485 00:09:59.485 ' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:59.485 --rc genhtml_branch_coverage=1 00:09:59.485 --rc genhtml_function_coverage=1 00:09:59.485 --rc genhtml_legend=1 00:09:59.485 --rc geninfo_all_blocks=1 00:09:59.485 --rc geninfo_unexecuted_blocks=1 00:09:59.485 00:09:59.485 ' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.485 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.486 21:06:46 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:59.486 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:09:59.486 21:06:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 
00:10:04.756 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:04.756 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:04.756 Found net devices under 0000:18:00.0: mlx_0_0 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:04.756 Found net devices under 0000:18:00.1: mlx_0_1 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:10:04.756 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:04.757 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:04.757 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:04.757 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:04.757 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:04.757 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:05.017 21:06:52 
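The device detection above walks sysfs from each matched PCI function to its netdev. A sketch of the same mapping, using the two mlx5 addresses found in this run (0000:18:00.0 and 0000:18:00.1):

    for pci in 0000:18:00.0 0000:18:00.1; do
        # Each RDMA-capable PCI function exposes its netdevs under
        # /sys/bus/pci/devices/<pci>/net/, e.g. mlx_0_0 and mlx_0_1 here.
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: $(basename "$dev")"
        done
    done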
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:05.017 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:05.017 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:05.017 altname enp24s0f0np0 00:10:05.017 altname ens785f0np0 00:10:05.017 inet 192.168.100.8/24 scope global mlx_0_0 00:10:05.017 valid_lft forever preferred_lft forever 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:05.017 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:05.017 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:05.017 altname enp24s0f1np1 00:10:05.017 altname ens785f1np1 00:10:05.017 inet 192.168.100.9/24 scope global mlx_0_1 00:10:05.017 valid_lft forever preferred_lft forever 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:05.017 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:05.018 21:06:52 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:05.018 192.168.100.9' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:05.018 192.168.100.9' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:05.018 192.168.100.9' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:05.018 21:06:52 
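Condensed, the initiator-side prep that nvmftestinit performs above is: load the RDMA and NVMe-oF kernel modules, then read the first IPv4 address off each Mellanox port. A sketch under those assumptions, with the interface names mlx_0_0/mlx_0_1 as detected in this run:

    # Module set loaded by the trace (rdma_device_init plus modprobe nvme-rdma).
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$m"
    done

    # Same pipeline as the trace's get_ip_address(): first IPv4 address,
    # prefix length stripped.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run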
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3815427 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3815427 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3815427 ']' 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.018 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.018 [2024-11-26 21:06:52.408609] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:10:05.018 [2024-11-26 21:06:52.408658] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.277 [2024-11-26 21:06:52.468280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:05.277 [2024-11-26 21:06:52.509871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:05.277 [2024-11-26 21:06:52.509906] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:05.277 [2024-11-26 21:06:52.509912] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:05.277 [2024-11-26 21:06:52.509918] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:05.277 [2024-11-26 21:06:52.509926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:05.277 [2024-11-26 21:06:52.511296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.277 [2024-11-26 21:06:52.511317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:05.277 [2024-11-26 21:06:52.511402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:05.277 [2024-11-26 21:06:52.511404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.277 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.277 [2024-11-26 21:06:52.646398] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:05.277 [2024-11-26 21:06:52.665177] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x203a530/0x203ea20) succeed. 00:10:05.277 [2024-11-26 21:06:52.673367] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x203bbc0/0x20800c0) succeed. 
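With the RDMA transport in place and both IB devices created, the trace that follows drives the standard bring-up sequence over the RPC socket: create a malloc bdev, create a subsystem, attach the bdev as a namespace, and open an RDMA listener. Condensed into the equivalent rpc.py calls (every name and value here is taken from the trace itself):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512      # 64 MiB bdev, 512-byte blocks -> "Malloc0"
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, connect_disconnect.sh loops num_iterations=5 times, connecting an initiator to cnode1 and tearing it down again; each pass logs one "disconnected 1 controller(s)" line below.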
00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:05.536 [2024-11-26 21:06:52.811211] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:05.536 21:06:52 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:09.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:25.657 21:07:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:25.657 rmmod nvme_rdma 00:10:25.657 rmmod nvme_fabrics 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3815427 ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3815427 ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3815427' 00:10:25.657 killing process with pid 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3815427 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:25.657 00:10:25.657 real 0m26.299s 00:10:25.657 user 1m23.050s 00:10:25.657 sys 0m5.028s 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.657 21:07:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:25.657 
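That block is the teardown path every section here shares: nvmftestfini syncs, unloads the initiator modules, and then kills the target by pid, refusing to signal anything that does not look like an SPDK reactor. Reduced to a sketch (pid from this run; the real nvmf/common.sh retries the modprobe loop up to 20 times and handles tcp as well as rdma):

    sync
    modprobe -v -r nvme-rdma         # also pulls out nvme-fabrics, as the rmmod lines show
    modprobe -v -r nvme-fabrics
    nvmfpid=3815427                  # recorded when nvmf_tgt was launched
    if [ "$(ps --no-headers -o comm= "$nvmfpid")" != sudo ]; then
        kill "$nvmfpid"
        wait "$nvmfpid"              # reap it so the next test starts from a clean slate
    fi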
************************************ 00:10:25.657 END TEST nvmf_connect_disconnect 00:10:25.657 ************************************ 00:10:25.657 21:07:13 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:25.657 21:07:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.657 21:07:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.657 21:07:13 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:25.657 ************************************ 00:10:25.657 START TEST nvmf_multitarget 00:10:25.657 ************************************ 00:10:25.657 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:25.916 * Looking for test storage... 00:10:25.916 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:25.916 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.917 --rc genhtml_branch_coverage=1 00:10:25.917 --rc genhtml_function_coverage=1 00:10:25.917 --rc genhtml_legend=1 00:10:25.917 --rc geninfo_all_blocks=1 00:10:25.917 --rc geninfo_unexecuted_blocks=1 00:10:25.917 00:10:25.917 ' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.917 --rc genhtml_branch_coverage=1 00:10:25.917 --rc genhtml_function_coverage=1 00:10:25.917 --rc genhtml_legend=1 00:10:25.917 --rc geninfo_all_blocks=1 00:10:25.917 --rc geninfo_unexecuted_blocks=1 00:10:25.917 00:10:25.917 ' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.917 --rc genhtml_branch_coverage=1 00:10:25.917 --rc genhtml_function_coverage=1 00:10:25.917 --rc genhtml_legend=1 00:10:25.917 --rc geninfo_all_blocks=1 00:10:25.917 --rc geninfo_unexecuted_blocks=1 00:10:25.917 00:10:25.917 ' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:25.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.917 --rc genhtml_branch_coverage=1 00:10:25.917 --rc genhtml_function_coverage=1 00:10:25.917 --rc genhtml_legend=1 00:10:25.917 --rc geninfo_all_blocks=1 00:10:25.917 --rc geninfo_unexecuted_blocks=1 00:10:25.917 00:10:25.917 ' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.917 21:07:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.917 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:25.917 21:07:13 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.917 21:07:13 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:31.193 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:31.193 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:31.193 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:31.194 Found net devices under 0000:18:00.0: mlx_0_0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:31.194 Found net devices under 0000:18:00.1: mlx_0_1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:31.194 21:07:18 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.194 21:07:18 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:31.194 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.194 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:31.194 altname enp24s0f0np0 00:10:31.194 altname ens785f0np0 00:10:31.194 inet 192.168.100.8/24 scope global mlx_0_0 00:10:31.194 valid_lft forever preferred_lft forever 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:31.194 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:31.194 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:31.194 altname enp24s0f1np1 00:10:31.194 altname ens785f1np1 00:10:31.194 inet 192.168.100.9/24 scope global mlx_0_1 00:10:31.194 valid_lft forever preferred_lft forever 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:31.194 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:31.454 192.168.100.9' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:31.454 
192.168.100.9' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:31.454 192.168.100.9' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:31.454 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3822539 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3822539 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3822539 ']' 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.455 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:31.455 [2024-11-26 21:07:18.731007] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
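Before this second app start, the harness ran the same address-discovery pass as the first test: walk get_rdma_if_list, pull each interface's IPv4 address, and split the resulting two-line list into first and second target IPs. The extraction idiom, shown standalone (get_ip_address and the NVMF_* variables are the harness's own names; interface names are from this machine, and assembling RDMA_IP_LIST inline is a simplification of get_available_rdma_ips):

    get_ip_address() {
        local interface=$1
        # "ip -o" prints one record per line; field 4 is ADDR/PREFIX, so strip the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"                                              # two lines: .8 and .9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9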
00:10:31.455 [2024-11-26 21:07:18.731052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.455 [2024-11-26 21:07:18.790469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.455 [2024-11-26 21:07:18.829797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.455 [2024-11-26 21:07:18.829833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.455 [2024-11-26 21:07:18.829841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.455 [2024-11-26 21:07:18.829847] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.455 [2024-11-26 21:07:18.829852] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.455 [2024-11-26 21:07:18.831261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.455 [2024-11-26 21:07:18.831324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.455 [2024-11-26 21:07:18.831411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.455 [2024-11-26 21:07:18.831413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:31.714 21:07:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:31.714 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:31.714 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:31.714 "nvmf_tgt_1" 00:10:31.973 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:31.973 "nvmf_tgt_2" 00:10:31.973 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:31.973 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:31.973 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:31.974 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:32.234 true 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:32.234 true 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:32.234 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:32.234 rmmod nvme_rdma 00:10:32.493 rmmod nvme_fabrics 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3822539 ']' 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3822539 ']' 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3822539' 00:10:32.493 killing process with pid 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3822539 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:32.493 00:10:32.493 real 0m6.855s 00:10:32.493 user 0m6.559s 00:10:32.493 sys 0m4.492s 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.493 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:32.493 ************************************ 00:10:32.493 END TEST nvmf_multitarget 00:10:32.494 ************************************ 00:10:32.755 21:07:19 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:32.755 21:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.755 21:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.755 21:07:19 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.755 ************************************ 00:10:32.755 START TEST nvmf_rpc 00:10:32.755 ************************************ 00:10:32.755 21:07:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:32.755 * Looking for test storage... 
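The storage-search preamble that follows (like the one before the multitarget test) gates the coverage flags on lcov's version: it takes the last field of "lcov --version" and keeps the old-style --rc options only when that version is below 2. A compact equivalent of the comparison, assuming plain dotted version strings (the harness's cmp_versions in scripts/common.sh also splits on '-' and ':', which this sketch omits):

    version_lt() {
        local IFS=. v a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # earlier field decides
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1    # equal fields all the way down: not less-than
    }

    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi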
00:10:32.755 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.755 --rc genhtml_branch_coverage=1 00:10:32.755 --rc genhtml_function_coverage=1 00:10:32.755 --rc genhtml_legend=1 00:10:32.755 --rc geninfo_all_blocks=1 00:10:32.755 --rc geninfo_unexecuted_blocks=1 00:10:32.755 00:10:32.755 ' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.755 --rc genhtml_branch_coverage=1 00:10:32.755 --rc genhtml_function_coverage=1 00:10:32.755 --rc genhtml_legend=1 00:10:32.755 --rc geninfo_all_blocks=1 00:10:32.755 --rc geninfo_unexecuted_blocks=1 00:10:32.755 00:10:32.755 ' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.755 --rc genhtml_branch_coverage=1 00:10:32.755 --rc genhtml_function_coverage=1 00:10:32.755 --rc genhtml_legend=1 00:10:32.755 --rc geninfo_all_blocks=1 00:10:32.755 --rc geninfo_unexecuted_blocks=1 00:10:32.755 00:10:32.755 ' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.755 --rc genhtml_branch_coverage=1 00:10:32.755 --rc genhtml_function_coverage=1 00:10:32.755 --rc genhtml_legend=1 00:10:32.755 --rc geninfo_all_blocks=1 00:10:32.755 --rc geninfo_unexecuted_blocks=1 00:10:32.755 00:10:32.755 ' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.755 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.756 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:32.756 21:07:20 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.756 21:07:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.332 21:07:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:10:39.332 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:10:39.332 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:39.332 Found net devices under 0000:18:00.0: mlx_0_0 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:39.332 Found net devices under 0000:18:00.1: mlx_0_1 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:39.332 21:07:25 
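rdma_device_init above (nvmf/common.sh@529) prepares the host kernel for NVMe/RDMA by loading the InfiniBand/RDMA core modules before any addresses are assigned. The same module set as a standalone sketch, assuming root; the names are exactly the ones modprobe'd in the trace:

    # Load the kernel modules rdma_device_init pulls in (nvmf/common.sh@66-72).
    # nvme-rdma itself is modprobe'd later in the trace (nvmf/common.sh@502),
    # once the rdma transport is confirmed.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done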
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.332 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:39.333 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.333 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:10:39.333 altname enp24s0f0np0 00:10:39.333 altname ens785f0np0 00:10:39.333 inet 192.168.100.8/24 scope global mlx_0_0 00:10:39.333 valid_lft forever preferred_lft forever 00:10:39.333 
21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:39.333 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:39.333 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:10:39.333 altname enp24s0f1np1 00:10:39.333 altname ens785f1np1 00:10:39.333 inet 192.168.100.9/24 scope global mlx_0_1 00:10:39.333 valid_lft forever preferred_lft forever 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:39.333 192.168.100.9' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:39.333 192.168.100.9' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:39.333 192.168.100.9' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
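The block above is where nvmftestinit learns the target addresses: get_ip_address (nvmf/common.sh@116-117) strips the first IPv4 address off each RDMA netdev, collecting 192.168.100.8 and 192.168.100.9 into RDMA_IP_LIST and the first/second target IPs. The helper distilled into a sketch, assuming the mlx_0_* interface names seen on this node:

    # First IPv4 address on a netdev, the same ip | awk | cut pipeline traced above.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this node
    get_ip_address mlx_0_1   # 192.168.100.9 on this node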
00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3825978 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3825978 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3825978 ']' 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.333 [2024-11-26 21:07:25.769703] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:10:39.333 [2024-11-26 21:07:25.769750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.333 [2024-11-26 21:07:25.829250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.333 [2024-11-26 21:07:25.867935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.333 [2024-11-26 21:07:25.867971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.333 [2024-11-26 21:07:25.867978] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.333 [2024-11-26 21:07:25.867984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.333 [2024-11-26 21:07:25.867989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.333 [2024-11-26 21:07:25.869212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.333 [2024-11-26 21:07:25.869310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.333 [2024-11-26 21:07:25.869333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.333 [2024-11-26 21:07:25.869334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.333 21:07:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.333 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:39.333 "tick_rate": 2700000000, 00:10:39.333 "poll_groups": [ 00:10:39.333 { 00:10:39.333 "name": "nvmf_tgt_poll_group_000", 00:10:39.333 "admin_qpairs": 0, 00:10:39.333 "io_qpairs": 0, 00:10:39.333 "current_admin_qpairs": 0, 00:10:39.333 "current_io_qpairs": 0, 00:10:39.333 "pending_bdev_io": 0, 00:10:39.333 "completed_nvme_io": 0, 00:10:39.333 "transports": [] 00:10:39.333 }, 00:10:39.333 { 00:10:39.333 "name": "nvmf_tgt_poll_group_001", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [] 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_002", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [] 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_003", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 }' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 [2024-11-26 21:07:26.135863] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a9590/0x21ada80) succeed. 00:10:39.334 [2024-11-26 21:07:26.144072] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21aac20/0x21ef120) succeed. 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:39.334 "tick_rate": 2700000000, 00:10:39.334 "poll_groups": [ 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_000", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [ 00:10:39.334 { 00:10:39.334 "trtype": "RDMA", 00:10:39.334 "pending_data_buffer": 0, 00:10:39.334 "devices": [ 00:10:39.334 { 00:10:39.334 "name": "mlx5_0", 00:10:39.334 "polls": 14809, 00:10:39.334 "idle_polls": 14809, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "mlx5_1", 00:10:39.334 "polls": 14809, 00:10:39.334 "idle_polls": 14809, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_001", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [ 00:10:39.334 { 00:10:39.334 "trtype": "RDMA", 00:10:39.334 "pending_data_buffer": 0, 00:10:39.334 "devices": [ 00:10:39.334 { 00:10:39.334 "name": "mlx5_0", 
00:10:39.334 "polls": 9370, 00:10:39.334 "idle_polls": 9370, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "mlx5_1", 00:10:39.334 "polls": 9370, 00:10:39.334 "idle_polls": 9370, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_002", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [ 00:10:39.334 { 00:10:39.334 "trtype": "RDMA", 00:10:39.334 "pending_data_buffer": 0, 00:10:39.334 "devices": [ 00:10:39.334 { 00:10:39.334 "name": "mlx5_0", 00:10:39.334 "polls": 5391, 00:10:39.334 "idle_polls": 5391, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "mlx5_1", 00:10:39.334 "polls": 5391, 00:10:39.334 "idle_polls": 5391, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "nvmf_tgt_poll_group_003", 00:10:39.334 "admin_qpairs": 0, 00:10:39.334 "io_qpairs": 0, 00:10:39.334 "current_admin_qpairs": 0, 00:10:39.334 "current_io_qpairs": 0, 00:10:39.334 "pending_bdev_io": 0, 00:10:39.334 "completed_nvme_io": 0, 00:10:39.334 "transports": [ 00:10:39.334 { 00:10:39.334 "trtype": "RDMA", 00:10:39.334 "pending_data_buffer": 0, 00:10:39.334 "devices": [ 00:10:39.334 { 00:10:39.334 "name": "mlx5_0", 00:10:39.334 "polls": 917, 00:10:39.334 "idle_polls": 917, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 }, 00:10:39.334 { 00:10:39.334 "name": "mlx5_1", 
00:10:39.334 "polls": 917, 00:10:39.334 "idle_polls": 917, 00:10:39.334 "completions": 0, 00:10:39.334 "requests": 0, 00:10:39.334 "request_latency": 0, 00:10:39.334 "pending_free_request": 0, 00:10:39.334 "pending_rdma_read": 0, 00:10:39.334 "pending_rdma_write": 0, 00:10:39.334 "pending_rdma_send": 0, 00:10:39.334 "total_send_wrs": 0, 00:10:39.334 "send_doorbell_updates": 0, 00:10:39.334 "total_recv_wrs": 4096, 00:10:39.334 "recv_doorbell_updates": 1 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 } 00:10:39.334 ] 00:10:39.334 }' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.334 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:39.335 21:07:26 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 Malloc1 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 [2024-11-26 21:07:26.552159] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:39.335 21:07:26 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:10:39.335 [2024-11-26 21:07:26.602176] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:10:39.335 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:39.335 could not add new controller: failed to write to nvme-fabrics device 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.335 21:07:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:40.274 21:07:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.274 21:07:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:40.274 21:07:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.274 21:07:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:40.274 21:07:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:42.802 21:07:29 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:42.802 21:07:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:43.369 [2024-11-26 21:07:30.653580] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562' 00:10:43.369 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:43.369 could not add new controller: failed to write to nvme-fabrics device 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.369 21:07:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:44.307 21:07:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.307 21:07:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:44.307 21:07:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.307 21:07:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:44.307 21:07:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:46.841 21:07:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:47.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 [2024-11-26 21:07:34.665147] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.409 21:07:34 
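The waitforserial / waitforserial_disconnect pair that brackets every connect polls lsblk for the subsystem serial, SPDKISFASTANDAWESOME. Sketched from the @1202-@1235 trace; the retry bound and sleep match what is traced, the rest is an assumed shape rather than verbatim source:

    # Wait until $2 (default 1) block devices carrying serial $1 show up.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }

    # Converse check: wait until no device with serial $1 remains.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ > 15)) && return 1
            sleep 2
        done
        ! lsblk -l -o NAME,SERIAL | grep -q -w "$serial"
    }

In this run every wait resolves on the first probe (nvme_devices=1 right after the 2-second settle), so each connect shows exactly one lsblk round in the log.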
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.409 21:07:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:48.346 21:07:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:48.346 21:07:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:48.346 21:07:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:48.346 21:07:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:48.346 21:07:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:50.253 21:07:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 [2024-11-26 21:07:38.681963] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.632 21:07:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:52.570 21:07:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.570 21:07:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:52.570 21:07:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.570 21:07:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:52.570 21:07:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:54.477 
21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:54.477 21:07:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 [2024-11-26 21:07:42.675827] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.415 21:07:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:56.359 21:07:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.359 21:07:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.359 21:07:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.359 21:07:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.359 21:07:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:58.266 21:07:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:59.205 21:07:46 
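Each of the five near-identical passes through this stretch is one iteration of rpc.sh's first loop (script lines 81-94, per the markers): build the subsystem, expose it over RDMA, connect the kernel host, then tear it all down. The reconstructed shape, with the host identity abbreviated to variables that expand to the uuid NQN visible in the log:

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5    # fixed nsid 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" \
            -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Each iteration costs roughly four seconds of wall time, almost all of it the sleep 2 settles inside the serial waits.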
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.205 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 [2024-11-26 21:07:46.664733] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.465 21:07:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
--hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:00.403 21:07:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.403 21:07:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:00.403 21:07:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.403 21:07:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:00.403 21:07:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:02.309 21:07:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.245 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:03.246 21:07:50 
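Every RPC in this log is bracketed by the same xtrace_disable / [[ 0 == 0 ]] pair; that is the rpc_cmd wrapper doing its bookkeeping. A hedged sketch of the observable behavior only (the real helper in autotest_common.sh keeps a persistent rpc.py coprocess rather than spawning a process per call, and $rootdir plus the socket path are assumptions here):

    rpc_cmd() {
        xtrace_disable    # the @563 lines: mute tracing while the RPC round-trips
        local rc=0
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@" || rc=$?
        xtrace_restore
        [[ $rc == 0 ]]    # the @591 '[[ 0 == 0 ]]' lines are this status check
    }

The point is that every nvmf_* verb here is a JSON-RPC against the running target, so any non-zero status aborts the test immediately under errexit.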
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.246 [2024-11-26 21:07:50.662610] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.246 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.505 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.505 21:07:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:04.444 21:07:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:04.444 21:07:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:04.444 21:07:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:04.444 21:07:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:04.444 21:07:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:06.349 21:07:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 [2024-11-26 21:07:54.679154] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.287 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 [2024-11-26 21:07:54.727342] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 
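From rpc.sh@99 onward the pattern changes: the same subsystem is created and destroyed five more times with no host connect in between, pure RPC churn. Note that nvmf_subsystem_add_ns now omits -n, so the target auto-assigns nsid 1, which is what the later remove_ns 1 targets. Reconstructed shape of the loop:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1    # auto nsid -> 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

All five iterations complete within the same second (21:07:54), a useful contrast with the connect loop above.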
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.548 [2024-11-26 21:07:54.775511] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.548 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 [2024-11-26 21:07:54.823659] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 [2024-11-26 21:07:54.871854] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 
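The JSON blob that follows is the captured output of rpc_cmd nvmf_get_stats: one entry per target poll group, each with per-RDMA-device (mlx5_0 active, mlx5_1 idle) counters for polls, completions, work requests, and doorbell updates. rpc.sh then asserts the totals are sane with the jsum helper, whose shape can be read straight off the @19-@20 trace further down:

    # Capture the stats once, then sum a numeric jq filter over the JSON.
    stats=$(rpc_cmd nvmf_get_stats)
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # the four assertions that follow, with the sums this run produced:
    (($(jsum '.poll_groups[].admin_qpairs') > 0))                             # 7
    (($(jsum '.poll_groups[].io_qpairs') > 0))                                # 105
    (($(jsum '.poll_groups[].transports[].devices[].completions') > 0))       # 1298
    (($(jsum '.poll_groups[].transports[].devices[].request_latency') > 0))   # 135089078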
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.549 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:07.549 "tick_rate": 2700000000, 00:11:07.549 "poll_groups": [ 00:11:07.549 { 00:11:07.549 "name": "nvmf_tgt_poll_group_000", 00:11:07.549 "admin_qpairs": 2, 00:11:07.549 "io_qpairs": 27, 00:11:07.549 "current_admin_qpairs": 0, 00:11:07.549 "current_io_qpairs": 0, 00:11:07.549 "pending_bdev_io": 0, 00:11:07.549 "completed_nvme_io": 128, 00:11:07.549 "transports": [ 00:11:07.549 { 00:11:07.549 "trtype": "RDMA", 00:11:07.549 "pending_data_buffer": 0, 00:11:07.549 "devices": [ 00:11:07.549 { 00:11:07.549 "name": "mlx5_0", 00:11:07.549 "polls": 3559010, 00:11:07.549 "idle_polls": 3558681, 00:11:07.549 "completions": 367, 00:11:07.549 "requests": 183, 00:11:07.550 "request_latency": 38517804, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 309, 00:11:07.550 "send_doorbell_updates": 161, 00:11:07.550 "total_recv_wrs": 4279, 00:11:07.550 "recv_doorbell_updates": 161 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "mlx5_1", 00:11:07.550 "polls": 3559010, 00:11:07.550 "idle_polls": 3559010, 00:11:07.550 "completions": 0, 00:11:07.550 "requests": 0, 00:11:07.550 "request_latency": 0, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 0, 00:11:07.550 "send_doorbell_updates": 0, 00:11:07.550 "total_recv_wrs": 4096, 00:11:07.550 "recv_doorbell_updates": 1 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "nvmf_tgt_poll_group_001", 00:11:07.550 "admin_qpairs": 2, 00:11:07.550 "io_qpairs": 26, 00:11:07.550 "current_admin_qpairs": 0, 00:11:07.550 "current_io_qpairs": 0, 00:11:07.550 "pending_bdev_io": 0, 00:11:07.550 "completed_nvme_io": 125, 00:11:07.550 "transports": [ 00:11:07.550 { 00:11:07.550 "trtype": "RDMA", 00:11:07.550 "pending_data_buffer": 0, 00:11:07.550 "devices": [ 00:11:07.550 { 00:11:07.550 "name": "mlx5_0", 00:11:07.550 "polls": 3524971, 00:11:07.550 "idle_polls": 3524648, 00:11:07.550 "completions": 362, 00:11:07.550 "requests": 181, 00:11:07.550 "request_latency": 37022076, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 306, 00:11:07.550 "send_doorbell_updates": 157, 00:11:07.550 "total_recv_wrs": 4277, 00:11:07.550 "recv_doorbell_updates": 158 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "mlx5_1", 00:11:07.550 "polls": 3524971, 00:11:07.550 "idle_polls": 3524971, 00:11:07.550 "completions": 0, 00:11:07.550 "requests": 0, 00:11:07.550 "request_latency": 0, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 
"pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 0, 00:11:07.550 "send_doorbell_updates": 0, 00:11:07.550 "total_recv_wrs": 4096, 00:11:07.550 "recv_doorbell_updates": 1 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "nvmf_tgt_poll_group_002", 00:11:07.550 "admin_qpairs": 1, 00:11:07.550 "io_qpairs": 26, 00:11:07.550 "current_admin_qpairs": 0, 00:11:07.550 "current_io_qpairs": 0, 00:11:07.550 "pending_bdev_io": 0, 00:11:07.550 "completed_nvme_io": 91, 00:11:07.550 "transports": [ 00:11:07.550 { 00:11:07.550 "trtype": "RDMA", 00:11:07.550 "pending_data_buffer": 0, 00:11:07.550 "devices": [ 00:11:07.550 { 00:11:07.550 "name": "mlx5_0", 00:11:07.550 "polls": 3678353, 00:11:07.550 "idle_polls": 3678148, 00:11:07.550 "completions": 241, 00:11:07.550 "requests": 120, 00:11:07.550 "request_latency": 29178910, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 199, 00:11:07.550 "send_doorbell_updates": 99, 00:11:07.550 "total_recv_wrs": 4216, 00:11:07.550 "recv_doorbell_updates": 99 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "mlx5_1", 00:11:07.550 "polls": 3678353, 00:11:07.550 "idle_polls": 3678353, 00:11:07.550 "completions": 0, 00:11:07.550 "requests": 0, 00:11:07.550 "request_latency": 0, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 0, 00:11:07.550 "send_doorbell_updates": 0, 00:11:07.550 "total_recv_wrs": 4096, 00:11:07.550 "recv_doorbell_updates": 1 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "nvmf_tgt_poll_group_003", 00:11:07.550 "admin_qpairs": 2, 00:11:07.550 "io_qpairs": 26, 00:11:07.550 "current_admin_qpairs": 0, 00:11:07.550 "current_io_qpairs": 0, 00:11:07.550 "pending_bdev_io": 0, 00:11:07.550 "completed_nvme_io": 111, 00:11:07.550 "transports": [ 00:11:07.550 { 00:11:07.550 "trtype": "RDMA", 00:11:07.550 "pending_data_buffer": 0, 00:11:07.550 "devices": [ 00:11:07.550 { 00:11:07.550 "name": "mlx5_0", 00:11:07.550 "polls": 2840902, 00:11:07.550 "idle_polls": 2840603, 00:11:07.550 "completions": 328, 00:11:07.550 "requests": 164, 00:11:07.550 "request_latency": 30370288, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 273, 00:11:07.550 "send_doorbell_updates": 148, 00:11:07.550 "total_recv_wrs": 4260, 00:11:07.550 "recv_doorbell_updates": 149 00:11:07.550 }, 00:11:07.550 { 00:11:07.550 "name": "mlx5_1", 00:11:07.550 "polls": 2840902, 00:11:07.550 "idle_polls": 2840902, 00:11:07.550 "completions": 0, 00:11:07.550 "requests": 0, 00:11:07.550 "request_latency": 0, 00:11:07.550 "pending_free_request": 0, 00:11:07.550 "pending_rdma_read": 0, 00:11:07.550 "pending_rdma_write": 0, 00:11:07.550 "pending_rdma_send": 0, 00:11:07.550 "total_send_wrs": 0, 00:11:07.550 "send_doorbell_updates": 0, 00:11:07.550 "total_recv_wrs": 4096, 00:11:07.550 "recv_doorbell_updates": 1 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 } 00:11:07.550 ] 00:11:07.550 }' 00:11:07.550 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:11:07.550 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.550 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.550 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.810 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:07.810 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.810 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.810 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.810 21:07:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1298 > 0 )) 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 135089078 > 0 )) 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:07.810 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:07.811 rmmod nvme_rdma 00:11:07.811 rmmod nvme_fabrics 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.811 
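With the stats assertions passed, nvmftestfini tears everything down: unload the host-side fabrics modules (with retries, since they can stay busy briefly after a disconnect), then kill the target, pid 3825978 in this run. A sketch matching the nvmf/common.sh and killprocess traces; the {1..20} retry bound is taken from the trace, the sleep between attempts is an assumption:

    nvmfcleanup() {
        sync
        set +e                                   # rmmod can fail while references drain
        for i in {1..20}; do
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }

    killprocess() {
        local pid=$1
        kill -0 "$pid"                           # fail fast if it already exited
        if [[ $(uname) == Linux ]]; then
            # process_name is reactor_0 here; the real helper gives a wrapping
            # 'sudo' extra handling that this sketch omits
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }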
21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3825978 ']' 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3825978 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3825978 ']' 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3825978 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3825978 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3825978' 00:11:07.811 killing process with pid 3825978 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3825978 00:11:07.811 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3825978 00:11:08.069 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.069 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:08.069 00:11:08.069 real 0m35.498s 00:11:08.069 user 1m59.824s 00:11:08.069 sys 0m5.650s 00:11:08.069 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.069 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.069 ************************************ 00:11:08.069 END TEST nvmf_rpc 00:11:08.069 ************************************ 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.378 ************************************ 00:11:08.378 START TEST nvmf_invalid 00:11:08.378 ************************************ 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:08.378 * Looking for test storage... 
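The killprocess sequence just traced follows a fixed shape: check that a pid was supplied, probe liveness with kill -0, resolve the process name with ps --no-headers -o comm= (the trace uses this to special-case sudo), announce the kill, then kill and wait to reap the target. A simplified reconstruction from the trace, not the canonical autotest_common.sh source; the sudo branch and retry handling are omitted:

    # Reconstructed from the autotest_common.sh trace above; simplified.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")      # e.g. reactor_0
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid"    # reap; valid because nvmf_tgt was started by this shell
    }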
00:11:08.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.378 --rc genhtml_branch_coverage=1 00:11:08.378 --rc genhtml_function_coverage=1 00:11:08.378 --rc genhtml_legend=1 00:11:08.378 --rc geninfo_all_blocks=1 00:11:08.378 --rc geninfo_unexecuted_blocks=1 00:11:08.378 00:11:08.378 ' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.378 --rc genhtml_branch_coverage=1 00:11:08.378 --rc genhtml_function_coverage=1 00:11:08.378 --rc genhtml_legend=1 00:11:08.378 --rc geninfo_all_blocks=1 00:11:08.378 --rc geninfo_unexecuted_blocks=1 00:11:08.378 00:11:08.378 ' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.378 --rc genhtml_branch_coverage=1 00:11:08.378 --rc genhtml_function_coverage=1 00:11:08.378 --rc genhtml_legend=1 00:11:08.378 --rc geninfo_all_blocks=1 00:11:08.378 --rc geninfo_unexecuted_blocks=1 00:11:08.378 00:11:08.378 ' 00:11:08.378 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:08.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.378 --rc genhtml_branch_coverage=1 00:11:08.378 --rc genhtml_function_coverage=1 00:11:08.378 --rc genhtml_legend=1 00:11:08.378 --rc geninfo_all_blocks=1 00:11:08.378 --rc geninfo_unexecuted_blocks=1 00:11:08.378 00:11:08.378 ' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:08.379 
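The lt 1.15 2 call traced above (scripts/common.sh) is a component-wise version comparison: both version strings are split on dots, dashes, and colons, then compared field by field, with missing fields treated as zero. A sketch using the names from the trace; the internals are simplified relative to the real cmp_versions:

    # Component-wise version compare, names per the scripts/common.sh trace.
    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 v
        local IFS=.-:                       # split on dots, dashes, colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && [[ $op == '<' ]] && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && [[ $op == '>' ]] && return 0
            ((${ver1[v]:-0} != ${ver2[v]:-0})) && return 1
        done
        return 1                            # equal satisfies neither < nor >
    }

Here lt 1.15 2 succeeds on the first field (1 < 2), which gates the lcov option handling that follows in the trace.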
21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.379 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.379 21:07:55 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.730 21:08:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:13.730 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:13.730 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:13.730 Found net devices under 0000:18:00.0: mlx_0_0 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:13.730 Found net devices under 0000:18:00.1: mlx_0_1 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:13.730 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:11:13.730 21:08:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:13.990 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.991 21:08:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:13.991 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.991 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:13.991 altname enp24s0f0np0 00:11:13.991 altname ens785f0np0 00:11:13.991 inet 192.168.100.8/24 scope global mlx_0_0 00:11:13.991 valid_lft forever preferred_lft forever 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:13.991 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:13.991 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:13.991 altname enp24s0f1np1 00:11:13.991 altname ens785f1np1 00:11:13.991 inet 192.168.100.9/24 scope global mlx_0_1 00:11:13.991 valid_lft forever preferred_lft forever 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:13.991 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:13.992 192.168.100.9' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:13.992 192.168.100.9' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- 
# echo '192.168.100.8 00:11:13.992 192.168.100.9' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3834926 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3834926 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3834926 ']' 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.992 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:13.992 [2024-11-26 21:08:01.407208] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:11:13.992 [2024-11-26 21:08:01.407271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.251 [2024-11-26 21:08:01.467326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.251 [2024-11-26 21:08:01.506989] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.251 [2024-11-26 21:08:01.507028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
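Everything allocate_nic_ips and get_available_rdma_ips did above reduces to one primitive that appears verbatim in the trace (nvmf/common.sh lines 116-117): read the first IPv4 address off an interface. A self-contained sketch, including the head/tail split the trace uses to pick the first and second target IPs; the mlx_0_0 and mlx_0_1 names are specific to this run, not general:

    # Verbatim pipeline from the trace: first IPv4 address on an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # One IP per RDMA netdev, then split as common.sh does above:
    RDMA_IP_LIST=$(for dev in mlx_0_0 mlx_0_1; do get_ip_address "$dev"; done)
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1) # 192.168.100.9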
00:11:14.251 [2024-11-26 21:08:01.507035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.251 [2024-11-26 21:08:01.507041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.251 [2024-11-26 21:08:01.507057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.251 [2024-11-26 21:08:01.508488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.251 [2024-11-26 21:08:01.508584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.251 [2024-11-26 21:08:01.508599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.251 [2024-11-26 21:08:01.508601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:14.251 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.252 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:14.252 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15622 00:11:14.511 [2024-11-26 21:08:01.799855] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:14.511 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:14.511 { 00:11:14.511 "nqn": "nqn.2016-06.io.spdk:cnode15622", 00:11:14.511 "tgt_name": "foobar", 00:11:14.511 "method": "nvmf_create_subsystem", 00:11:14.511 "req_id": 1 00:11:14.511 } 00:11:14.511 Got JSON-RPC error response 00:11:14.511 response: 00:11:14.511 { 00:11:14.511 "code": -32603, 00:11:14.511 "message": "Unable to find target foobar" 00:11:14.511 }' 00:11:14.511 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:14.511 { 00:11:14.511 "nqn": "nqn.2016-06.io.spdk:cnode15622", 00:11:14.511 "tgt_name": "foobar", 00:11:14.511 "method": "nvmf_create_subsystem", 00:11:14.511 "req_id": 1 00:11:14.511 } 00:11:14.511 Got JSON-RPC error response 00:11:14.511 response: 00:11:14.511 { 00:11:14.511 "code": -32603, 00:11:14.511 "message": "Unable to find target foobar" 00:11:14.511 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:14.511 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:14.511 21:08:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30621 00:11:14.770 [2024-11-26 21:08:01.992498] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode30621: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:14.770 { 00:11:14.770 "nqn": "nqn.2016-06.io.spdk:cnode30621", 00:11:14.770 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:14.770 "method": "nvmf_create_subsystem", 00:11:14.770 "req_id": 1 00:11:14.770 } 00:11:14.770 Got JSON-RPC error response 00:11:14.770 response: 00:11:14.770 { 00:11:14.770 "code": -32602, 00:11:14.770 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:14.770 }' 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:14.770 { 00:11:14.770 "nqn": "nqn.2016-06.io.spdk:cnode30621", 00:11:14.770 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:14.770 "method": "nvmf_create_subsystem", 00:11:14.770 "req_id": 1 00:11:14.770 } 00:11:14.770 Got JSON-RPC error response 00:11:14.770 response: 00:11:14.770 { 00:11:14.770 "code": -32602, 00:11:14.770 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:14.770 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32444 00:11:14.770 [2024-11-26 21:08:02.173068] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32444: invalid model number 'SPDK_Controller' 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:14.770 { 00:11:14.770 "nqn": "nqn.2016-06.io.spdk:cnode32444", 00:11:14.770 "model_number": "SPDK_Controller\u001f", 00:11:14.770 "method": "nvmf_create_subsystem", 00:11:14.770 "req_id": 1 00:11:14.770 } 00:11:14.770 Got JSON-RPC error response 00:11:14.770 response: 00:11:14.770 { 00:11:14.770 "code": -32602, 00:11:14.770 "message": "Invalid MN SPDK_Controller\u001f" 00:11:14.770 }' 00:11:14.770 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:14.770 { 00:11:14.770 "nqn": "nqn.2016-06.io.spdk:cnode32444", 00:11:14.770 "model_number": "SPDK_Controller\u001f", 00:11:14.770 "method": "nvmf_create_subsystem", 00:11:14.770 "req_id": 1 00:11:14.770 } 00:11:14.770 Got JSON-RPC error response 00:11:14.770 response: 00:11:14.770 { 00:11:14.770 "code": -32602, 00:11:14.770 "message": "Invalid MN SPDK_Controller\u001f" 00:11:14.770 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@21 -- # local chars 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:15.031 21:08:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:15.031 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.032 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.032 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ B == \- ]] 00:11:15.032 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'B{NGZq5XfaQ^G xk(Y+H1' 00:11:15.032 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'B{NGZq5XfaQ^G xk(Y+H1' nqn.2016-06.io.spdk:cnode1959 00:11:15.293 [2024-11-26 21:08:02.506114] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1959: invalid serial number 'B{NGZq5XfaQ^G xk(Y+H1' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:15.293 { 00:11:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1959", 00:11:15.293 "serial_number": "B{NGZq5XfaQ^G xk(Y+H1", 00:11:15.293 "method": "nvmf_create_subsystem", 00:11:15.293 "req_id": 1 00:11:15.293 } 00:11:15.293 Got JSON-RPC error response 00:11:15.293 response: 00:11:15.293 { 00:11:15.293 "code": -32602, 00:11:15.293 "message": "Invalid SN B{NGZq5XfaQ^G xk(Y+H1" 00:11:15.293 }' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:15.293 { 00:11:15.293 "nqn": "nqn.2016-06.io.spdk:cnode1959", 00:11:15.293 "serial_number": "B{NGZq5XfaQ^G xk(Y+H1", 00:11:15.293 "method": "nvmf_create_subsystem", 00:11:15.293 "req_id": 1 00:11:15.293 } 00:11:15.293 Got JSON-RPC error response 00:11:15.293 response: 00:11:15.293 { 00:11:15.293 "code": -32602, 00:11:15.293 "message": "Invalid SN B{NGZq5XfaQ^G xk(Y+H1" 00:11:15.293 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:15.293 21:08:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:15.293 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x2f' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:15.294 21:08:02 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:15.294 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.295 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 66 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@24 -- # (( ll < length ))
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65'
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]]
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=i8H$W/&Ih+n:fhuMe4Ll";|\:K\U~/\%B$8"hNRe'
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '=i8H$W/&Ih+n:fhuMe4Ll";|\:K\U~/\%B$8"hNRe' nqn.2016-06.io.spdk:cnode1886
00:11:15.555 [2024-11-26 21:08:02.955566] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1886: invalid model number '=i8H$W/&Ih+n:fhuMe4Ll";|\:K\U~/\%B$8"hNRe'
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:11:15.555 {
00:11:15.555 "nqn": "nqn.2016-06.io.spdk:cnode1886",
00:11:15.555 "model_number": "=i8H$W/&Ih+n:fhuMe4Ll\";|\\:K\\U~/\\%B$8\"hNRe",
00:11:15.555 "method": "nvmf_create_subsystem",
00:11:15.555 "req_id": 1
00:11:15.555 }
00:11:15.555 Got JSON-RPC error response
00:11:15.555 response:
00:11:15.555 {
00:11:15.555 "code": -32602,
00:11:15.555 "message": "Invalid MN =i8H$W/&Ih+n:fhuMe4Ll\";|\\:K\\U~/\\%B$8\"hNRe"
00:11:15.555 }'
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:11:15.555 {
00:11:15.555 "nqn": "nqn.2016-06.io.spdk:cnode1886",
00:11:15.555 "model_number": "=i8H$W/&Ih+n:fhuMe4Ll\";|\\:K\\U~/\\%B$8\"hNRe",
00:11:15.555 "method": "nvmf_create_subsystem",
00:11:15.555 "req_id": 1
00:11:15.555 }
00:11:15.555 Got JSON-RPC error response
00:11:15.555 response:
00:11:15.555 {
00:11:15.555 "code": -32602,
00:11:15.555 "message": "Invalid MN =i8H$W/&Ih+n:fhuMe4Ll\";|\\:K\\U~/\\%B$8\"hNRe"
00:11:15.555 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:15.555 21:08:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma
00:11:15.814 [2024-11-26 21:08:03.160043] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbdae50/0xbdf340) succeed.
00:11:15.814 [2024-11-26 21:08:03.168153] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbdc4e0/0xc209e0) succeed.
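The character-by-character xtrace above is the test's random-string helper assembling a 41-character candidate model number. A minimal sketch of that helper, reconstructed from the trace (the dash-replacement detail and the exact RANDOM indexing are assumptions, not a verbatim copy of target/invalid.sh):

    gen_random_s() {
        local length=$1 ll
        local chars=({32..127})  # printable ASCII code points, starting at space
        local string
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code point, format it as hex, expand it to a character
            string+=$(echo -en "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        # assumed guard behind the '[[ = == \- ]]' check at invalid.sh@28:
        # keep a leading '-' from being parsed as an option by downstream tools
        [[ ${string::1} == - ]] && string=${string/-/$ll}
        echo "$string"
    }

Called as gen_random_s 41, it yields values like the string echoed above; invalid.sh then passes the result to rpc.py nvmf_create_subsystem -d and counts the step as passed only when the JSON-RPC reply matches *Invalid MN*, confirming the target rejects malformed model numbers.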
00:11:16.073 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:11:16.073 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]]
00:11:16.073 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8
00:11:16.073 192.168.100.9'
00:11:16.073 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:11:16.332 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8
00:11:16.332 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421
00:11:16.332 [2024-11-26 21:08:03.665109] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:11:16.332 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:11:16.332 {
00:11:16.332 "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:16.332 "listen_address": {
00:11:16.332 "trtype": "rdma",
00:11:16.332 "traddr": "192.168.100.8",
00:11:16.332 "trsvcid": "4421"
00:11:16.332 },
00:11:16.332 "method": "nvmf_subsystem_remove_listener",
00:11:16.332 "req_id": 1
00:11:16.332 }
00:11:16.332 Got JSON-RPC error response
00:11:16.332 response:
00:11:16.332 {
00:11:16.332 "code": -32602,
00:11:16.332 "message": "Invalid parameters"
00:11:16.332 }'
00:11:16.332 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:11:16.332 {
00:11:16.332 "nqn": "nqn.2016-06.io.spdk:cnode",
00:11:16.332 "listen_address": {
00:11:16.332 "trtype": "rdma",
00:11:16.332 "traddr": "192.168.100.8",
00:11:16.332 "trsvcid": "4421"
00:11:16.332 },
00:11:16.332 "method": "nvmf_subsystem_remove_listener",
00:11:16.332 "req_id": 1
00:11:16.332 }
00:11:16.332 Got JSON-RPC error response
00:11:16.332 response:
00:11:16.332 {
00:11:16.332 "code": -32602,
00:11:16.332 "message": "Invalid parameters"
00:11:16.332 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:11:16.332 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25546 -i 0
00:11:16.590 [2024-11-26 21:08:03.857750] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25546: invalid cntlid range [0-65519]
00:11:16.590 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:11:16.590 {
00:11:16.590 "nqn": "nqn.2016-06.io.spdk:cnode25546",
00:11:16.590 "min_cntlid": 0,
00:11:16.590 "method": "nvmf_create_subsystem",
00:11:16.590 "req_id": 1
00:11:16.590 }
00:11:16.590 Got JSON-RPC error response
00:11:16.590 response:
00:11:16.590 {
00:11:16.590 "code": -32602,
00:11:16.590 "message": "Invalid cntlid range [0-65519]"
00:11:16.590 }'
00:11:16.591 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:11:16.591 {
00:11:16.591 "nqn": "nqn.2016-06.io.spdk:cnode25546",
00:11:16.591 "min_cntlid": 0,
00:11:16.591 "method": "nvmf_create_subsystem",
00:11:16.591 "req_id": 1
00:11:16.591 }
00:11:16.591 Got JSON-RPC error response
00:11:16.591 response:
00:11:16.591 {
00:11:16.591 "code": -32602,
00:11:16.591 "message":
"Invalid cntlid range [0-65519]" 00:11:16.591 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:16.591 21:08:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15958 -i 65520 00:11:16.849 [2024-11-26 21:08:04.050440] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15958: invalid cntlid range [65520-65519] 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:16.849 { 00:11:16.849 "nqn": "nqn.2016-06.io.spdk:cnode15958", 00:11:16.849 "min_cntlid": 65520, 00:11:16.849 "method": "nvmf_create_subsystem", 00:11:16.849 "req_id": 1 00:11:16.849 } 00:11:16.849 Got JSON-RPC error response 00:11:16.849 response: 00:11:16.849 { 00:11:16.849 "code": -32602, 00:11:16.849 "message": "Invalid cntlid range [65520-65519]" 00:11:16.849 }' 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:16.849 { 00:11:16.849 "nqn": "nqn.2016-06.io.spdk:cnode15958", 00:11:16.849 "min_cntlid": 65520, 00:11:16.849 "method": "nvmf_create_subsystem", 00:11:16.849 "req_id": 1 00:11:16.849 } 00:11:16.849 Got JSON-RPC error response 00:11:16.849 response: 00:11:16.849 { 00:11:16.849 "code": -32602, 00:11:16.849 "message": "Invalid cntlid range [65520-65519]" 00:11:16.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode33 -I 0 00:11:16.849 [2024-11-26 21:08:04.239066] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode33: invalid cntlid range [1-0] 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:16.849 { 00:11:16.849 "nqn": "nqn.2016-06.io.spdk:cnode33", 00:11:16.849 "max_cntlid": 0, 00:11:16.849 "method": "nvmf_create_subsystem", 00:11:16.849 "req_id": 1 00:11:16.849 } 00:11:16.849 Got JSON-RPC error response 00:11:16.849 response: 00:11:16.849 { 00:11:16.849 "code": -32602, 00:11:16.849 "message": "Invalid cntlid range [1-0]" 00:11:16.849 }' 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:16.849 { 00:11:16.849 "nqn": "nqn.2016-06.io.spdk:cnode33", 00:11:16.849 "max_cntlid": 0, 00:11:16.849 "method": "nvmf_create_subsystem", 00:11:16.849 "req_id": 1 00:11:16.849 } 00:11:16.849 Got JSON-RPC error response 00:11:16.849 response: 00:11:16.849 { 00:11:16.849 "code": -32602, 00:11:16.849 "message": "Invalid cntlid range [1-0]" 00:11:16.849 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:16.849 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19419 -I 65520 00:11:17.109 [2024-11-26 21:08:04.427711] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19419: invalid cntlid range [1-65520] 00:11:17.109 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:17.109 { 00:11:17.109 "nqn": "nqn.2016-06.io.spdk:cnode19419", 00:11:17.109 "max_cntlid": 65520, 00:11:17.109 "method": "nvmf_create_subsystem", 00:11:17.109 "req_id": 1 00:11:17.109 } 00:11:17.109 Got JSON-RPC error 
response 00:11:17.109 response: 00:11:17.109 { 00:11:17.109 "code": -32602, 00:11:17.109 "message": "Invalid cntlid range [1-65520]" 00:11:17.109 }' 00:11:17.109 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:17.109 { 00:11:17.109 "nqn": "nqn.2016-06.io.spdk:cnode19419", 00:11:17.109 "max_cntlid": 65520, 00:11:17.109 "method": "nvmf_create_subsystem", 00:11:17.109 "req_id": 1 00:11:17.109 } 00:11:17.109 Got JSON-RPC error response 00:11:17.109 response: 00:11:17.109 { 00:11:17.109 "code": -32602, 00:11:17.109 "message": "Invalid cntlid range [1-65520]" 00:11:17.109 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:17.109 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21922 -i 6 -I 5 00:11:17.368 [2024-11-26 21:08:04.604334] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21922: invalid cntlid range [6-5] 00:11:17.368 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:17.368 { 00:11:17.368 "nqn": "nqn.2016-06.io.spdk:cnode21922", 00:11:17.368 "min_cntlid": 6, 00:11:17.368 "max_cntlid": 5, 00:11:17.369 "method": "nvmf_create_subsystem", 00:11:17.369 "req_id": 1 00:11:17.369 } 00:11:17.369 Got JSON-RPC error response 00:11:17.369 response: 00:11:17.369 { 00:11:17.369 "code": -32602, 00:11:17.369 "message": "Invalid cntlid range [6-5]" 00:11:17.369 }' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:17.369 { 00:11:17.369 "nqn": "nqn.2016-06.io.spdk:cnode21922", 00:11:17.369 "min_cntlid": 6, 00:11:17.369 "max_cntlid": 5, 00:11:17.369 "method": "nvmf_create_subsystem", 00:11:17.369 "req_id": 1 00:11:17.369 } 00:11:17.369 Got JSON-RPC error response 00:11:17.369 response: 00:11:17.369 { 00:11:17.369 "code": -32602, 00:11:17.369 "message": "Invalid cntlid range [6-5]" 00:11:17.369 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:17.369 { 00:11:17.369 "name": "foobar", 00:11:17.369 "method": "nvmf_delete_target", 00:11:17.369 "req_id": 1 00:11:17.369 } 00:11:17.369 Got JSON-RPC error response 00:11:17.369 response: 00:11:17.369 { 00:11:17.369 "code": -32602, 00:11:17.369 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:17.369 }' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:17.369 { 00:11:17.369 "name": "foobar", 00:11:17.369 "method": "nvmf_delete_target", 00:11:17.369 "req_id": 1 00:11:17.369 } 00:11:17.369 Got JSON-RPC error response 00:11:17.369 response: 00:11:17.369 { 00:11:17.369 "code": -32602, 00:11:17.369 "message": "The specified target doesn't exist, cannot delete it." 
00:11:17.369 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:17.369 rmmod nvme_rdma 00:11:17.369 rmmod nvme_fabrics 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3834926 ']' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3834926 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3834926 ']' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3834926 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:17.369 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3834926 00:11:17.627 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:17.627 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:17.627 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3834926' 00:11:17.627 killing process with pid 3834926 00:11:17.627 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3834926 00:11:17.627 21:08:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3834926 00:11:17.627 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:17.627 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:17.627 00:11:17.627 real 0m9.524s 00:11:17.627 user 0m18.046s 00:11:17.627 sys 0m5.147s 00:11:17.627 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.627 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:17.627 ************************************ 00:11:17.627 END 
TEST nvmf_invalid 00:11:17.627 ************************************ 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:17.887 ************************************ 00:11:17.887 START TEST nvmf_connect_stress 00:11:17.887 ************************************ 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:11:17.887 * Looking for test storage... 00:11:17.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.887 --rc genhtml_branch_coverage=1 00:11:17.887 --rc genhtml_function_coverage=1 00:11:17.887 --rc genhtml_legend=1 00:11:17.887 --rc geninfo_all_blocks=1 00:11:17.887 --rc geninfo_unexecuted_blocks=1 00:11:17.887 00:11:17.887 ' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.887 --rc genhtml_branch_coverage=1 00:11:17.887 --rc genhtml_function_coverage=1 00:11:17.887 --rc genhtml_legend=1 00:11:17.887 --rc geninfo_all_blocks=1 00:11:17.887 --rc geninfo_unexecuted_blocks=1 00:11:17.887 00:11:17.887 ' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.887 --rc genhtml_branch_coverage=1 00:11:17.887 --rc genhtml_function_coverage=1 00:11:17.887 --rc genhtml_legend=1 00:11:17.887 --rc geninfo_all_blocks=1 00:11:17.887 --rc geninfo_unexecuted_blocks=1 00:11:17.887 00:11:17.887 ' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:17.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:17.887 --rc genhtml_branch_coverage=1 00:11:17.887 --rc genhtml_function_coverage=1 00:11:17.887 --rc genhtml_legend=1 00:11:17.887 --rc geninfo_all_blocks=1 00:11:17.887 --rc geninfo_unexecuted_blocks=1 00:11:17.887 00:11:17.887 ' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.887 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:17.888 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:17.888 21:08:05 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:24.462 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
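Above, gather_supported_nvmf_pci_devs sorts PCI functions into the e810, x722 and mlx arrays by indexing a pci_bus_cache associative array keyed "vendor:device". One way such a cache can be built from lspci, shown as an illustration rather than the exact common.sh code:

    declare -A pci_bus_cache
    while read -r addr ids; do
        pci_bus_cache["0x${ids/:/:0x}"]+="$addr "
    done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3 ":" $4}')
    # e.g. the ConnectX-4 Lx functions reported as "0x15b3 - 0x1015" above:
    echo "${pci_bus_cache[0x15b3:0x1015]}"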
\0\x\1\0\1\9 ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:24.462 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:24.462 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:24.463 Found net devices under 0000:18:00.0: mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:24.463 Found net devices under 0000:18:00.1: mlx_0_1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.463 21:08:10 
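Each "Found net devices under ..." line above comes from globbing the PCI function's net/ directory in sysfs and stripping the path prefix. The same walk stated standalone, using the first address from this run:

    pci=0000:18:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"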
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:24.463 
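rdma_device_init above reduces to load_ib_rdma_modules, which pulls in the IB/RDMA core stack one module at a time. The same module list as a single loop (a condensation of the modprobe calls in the trace):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod" || { echo "failed to load $mod" >&2; exit 1; }
    done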
21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:24.463 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:24.463 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:24.463 altname enp24s0f0np0 00:11:24.463 altname ens785f0np0 00:11:24.463 inet 192.168.100.8/24 scope global mlx_0_0 00:11:24.463 valid_lft forever preferred_lft forever 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:24.463 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:24.463 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:24.463 altname enp24s0f1np1 00:11:24.463 altname ens785f1np1 00:11:24.463 inet 192.168.100.9/24 scope global mlx_0_1 00:11:24.463 valid_lft forever preferred_lft forever 00:11:24.463 21:08:10 
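The get_ip_address calls above reduce `ip -o -4 addr show <interface>` to a bare IPv4 address: field 4 of the one-line output is "192.168.100.8/24", and cut drops the prefix length. As a reusable function, with the interface name from this run:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this test bed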
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:24.463 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:24.464 
21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:24.464 192.168.100.9' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:24.464 192.168.100.9' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:24.464 192.168.100.9' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3838895 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3838895 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3838895 ']' 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.464 21:08:10 
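RDMA_IP_LIST above is a newline-separated string, so the first and second target addresses are peeled off with head and tail exactly as traced. The same split in isolation, with the values from this run:

    RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9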
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.464 21:08:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 [2024-11-26 21:08:10.995839] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:11:24.464 [2024-11-26 21:08:10.995893] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.464 [2024-11-26 21:08:11.056978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:24.464 [2024-11-26 21:08:11.094655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.464 [2024-11-26 21:08:11.094690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.464 [2024-11-26 21:08:11.094697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.464 [2024-11-26 21:08:11.094702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.464 [2024-11-26 21:08:11.094707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:24.464 [2024-11-26 21:08:11.095917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:24.464 [2024-11-26 21:08:11.095999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:24.464 [2024-11-26 21:08:11.096000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 [2024-11-26 21:08:11.255999] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1285cf0/0x128a1e0) succeed. 
00:11:24.464 [2024-11-26 21:08:11.264130] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12872e0/0x12cb880) succeed. 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 [2024-11-26 21:08:11.371205] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.464 NULL1 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3839154 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 
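With the kernel side ready, nvmfappstart launches build/bin/nvmf_tgt on core mask 0xE and waitforlisten blocks until /var/tmp/spdk.sock accepts RPCs; connect_stress.sh then issues the four RPCs traced above. Collected in order, arguments copied from the trace (rpc_cmd is the framework's RPC wrapper, ultimately scripts/rpc.py):

    rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks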
21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.464 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 
21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.465 21:08:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.724 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.724 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:24.724 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.724 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.724 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.293 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:25.293 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.293 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.293 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.551 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.551 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:25.551 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.551 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.551 21:08:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.811 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.811 
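From this point the output is the stress loop itself: connect_stress (PERF_PID 3839154, started with -t 10) hammers the subsystem while the shell keeps probing it with `kill -0` and replaying the collected RPCs. Reduced to its essential shape (a sketch, not the verbatim connect_stress.sh loop):

    PERF_PID=3839154                  # the connect_stress PID from this run
    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc_cmd < rpc.txt             # replay the queued RPCs against the target
    done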
21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:25.811 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.811 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.811 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.070 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.070 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:26.070 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.070 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.070 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.329 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.329 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:26.329 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.329 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.329 21:08:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.897 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.897 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:26.897 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.897 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.897 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.155 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.155 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:27.155 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.155 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.155 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.414 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:27.414 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.414 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.414 21:08:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.676 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:27.676 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:27.676 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.676 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.676 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.934 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.934 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:27.934 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.934 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.934 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.501 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.501 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:28.501 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.501 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.501 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.759 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.759 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:28.759 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.759 21:08:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.759 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.018 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.018 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:29.018 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.018 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.018 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.276 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.276 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:29.276 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.276 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.276 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.844 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:29.844 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:29.844 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.844 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.844 21:08:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.104 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.104 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:30.104 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.104 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.104 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.363 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.363 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:30.363 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.363 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.363 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.622 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.622 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:30.622 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.622 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.622 21:08:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.881 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.881 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:30.881 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.881 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.881 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.450 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.450 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:31.450 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.450 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.450 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.710 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.710 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:31.710 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.710 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.710 21:08:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.968 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.968 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:31.968 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.968 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.968 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.227 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.227 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:32.227 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.227 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.227 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.486 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.486 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:32.486 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.486 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.486 21:08:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.054 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.054 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:33.054 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.054 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.054 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.314 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.314 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:33.314 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.314 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.314 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.573 21:08:20 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.573 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:33.573 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.573 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.573 21:08:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.832 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.833 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:33.833 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.833 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.833 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.091 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.091 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:34.091 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.092 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.092 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.351 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3839154 00:11:34.610 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3839154) - No such process 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3839154 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- 
# modprobe -v -r nvme-rdma 00:11:34.610 rmmod nvme_rdma 00:11:34.610 rmmod nvme_fabrics 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3838895 ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3838895 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3838895 ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3838895 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3838895 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3838895' 00:11:34.610 killing process with pid 3838895 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3838895 00:11:34.610 21:08:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3838895 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:34.869 00:11:34.869 real 0m17.033s 00:11:34.869 user 0m40.795s 00:11:34.869 sys 0m6.219s 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.869 ************************************ 00:11:34.869 END TEST nvmf_connect_stress 00:11:34.869 ************************************ 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.869 ************************************ 00:11:34.869 START TEST nvmf_fused_ordering 00:11:34.869 ************************************ 00:11:34.869 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh 
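Teardown above unloads nvme-rdma and nvme-fabrics and then stops the target through killprocess, which checks that the PID is still alive and does not name sudo before signalling it. A condensed sketch of that guard (not the verbatim autotest_common.sh body):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0               # already gone
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
    killprocess 3838895   # the nvmf_tgt PID from this run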
--transport=rdma 00:11:35.130 * Looking for test storage... 00:11:35.130 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:35.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.130 --rc genhtml_branch_coverage=1 00:11:35.130 --rc genhtml_function_coverage=1 00:11:35.130 --rc genhtml_legend=1 00:11:35.130 --rc geninfo_all_blocks=1 00:11:35.130 --rc geninfo_unexecuted_blocks=1 00:11:35.130 00:11:35.130 ' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:35.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.130 --rc genhtml_branch_coverage=1 00:11:35.130 --rc genhtml_function_coverage=1 00:11:35.130 --rc genhtml_legend=1 00:11:35.130 --rc geninfo_all_blocks=1 00:11:35.130 --rc geninfo_unexecuted_blocks=1 00:11:35.130 00:11:35.130 ' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:35.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.130 --rc genhtml_branch_coverage=1 00:11:35.130 --rc genhtml_function_coverage=1 00:11:35.130 --rc genhtml_legend=1 00:11:35.130 --rc geninfo_all_blocks=1 00:11:35.130 --rc geninfo_unexecuted_blocks=1 00:11:35.130 00:11:35.130 ' 00:11:35.130 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:35.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.130 --rc genhtml_branch_coverage=1 00:11:35.130 --rc genhtml_function_coverage=1 00:11:35.130 --rc genhtml_legend=1 00:11:35.130 --rc geninfo_all_blocks=1 00:11:35.130 --rc geninfo_unexecuted_blocks=1 00:11:35.131 00:11:35.131 ' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.131 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.131 21:08:22 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.409 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:40.410 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:40.410 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:40.410 Found net devices under 0000:18:00.0: mlx_0_0 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:40.410 Found net devices under 0000:18:00.1: mlx_0_1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.410 21:08:27 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:40.410 
21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:40.410 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:40.410 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:40.410 altname enp24s0f0np0 00:11:40.410 altname ens785f0np0 00:11:40.410 inet 192.168.100.8/24 scope global mlx_0_0 00:11:40.410 valid_lft forever preferred_lft forever 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:40.410 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:40.411 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:40.411 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:40.411 altname enp24s0f1np1 00:11:40.411 altname ens785f1np1 00:11:40.411 inet 192.168.100.9/24 scope global mlx_0_1 00:11:40.411 valid_lft forever preferred_lft forever 00:11:40.411 21:08:27 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:40.411 
21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:40.411 192.168.100.9' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:40.411 192.168.100.9' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:40.411 192.168.100.9' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3844265 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3844265 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3844265 ']' 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.411 21:08:27 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.411 [2024-11-26 21:08:27.638025] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:11:40.411 [2024-11-26 21:08:27.638075] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.411 [2024-11-26 21:08:27.697901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.411 [2024-11-26 21:08:27.733797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.411 [2024-11-26 21:08:27.733831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.411 [2024-11-26 21:08:27.733837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.411 [2024-11-26 21:08:27.733843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.411 [2024-11-26 21:08:27.733847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.411 [2024-11-26 21:08:27.734342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.411 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 [2024-11-26 21:08:27.877271] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xefd610/0xf01b00) succeed. 00:11:40.670 [2024-11-26 21:08:27.886062] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xefeac0/0xf431a0) succeed. 
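For orientation, the bring-up traced above boils down to a short RPC sequence. The following is a minimal standalone sketch using SPDK's scripts/rpc.py, assuming the SPDK repo root as the working directory; the flags mirror the ones in the trace (core mask 0x2, RDMA transport, 1024 shared buffers, 8192-byte I/O unit), and the polling loop is a stand-in for the harness's waitforlisten helper, not its exact implementation.

# Sketch: start the nvmf target the same way nvmfappstart does above.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
# Poll until the RPC socket answers; waitforlisten does the equivalent with retries.
until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
# Create the RDMA transport with the parameters shown in the trace.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192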
00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 [2024-11-26 21:08:27.925552] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 NULL1 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.670 21:08:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:40.670 [2024-11-26 21:08:27.977172] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
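The remaining plumbing and the test binary invocation, again as a sketch with rpc.py from the repo root: every NQN, serial number, size, and address below is copied verbatim from the trace. bdev_null_create's 1000 and 512 arguments are the bdev size in megabytes and the block size, which is consistent with the "Namespace ID: 1 size: 1GB" line the tool prints below.

# Sketch: build the test subsystem and run the fused-ordering tool against it.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'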
00:11:40.670 [2024-11-26 21:08:27.977204] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3844298 ] 00:11:40.930 Attached to nqn.2016-06.io.spdk:cnode1 00:11:40.930 Namespace ID: 1 size: 1GB
[fused_ordering(0) through fused_ordering(743): 744 consecutive fused_ordering(N) completion lines, the counter increasing by one with no gaps, timestamps running from 00:11:40.930 to 00:11:41.193]
00:11:41.193 fused_ordering(744) 00:11:41.193 fused_ordering(745) 00:11:41.193 fused_ordering(746) 00:11:41.193 fused_ordering(747) 00:11:41.193 fused_ordering(748) 00:11:41.193 fused_ordering(749) 00:11:41.193 fused_ordering(750) 00:11:41.193 fused_ordering(751) 00:11:41.193 fused_ordering(752) 00:11:41.193 fused_ordering(753) 00:11:41.193 fused_ordering(754) 00:11:41.193 fused_ordering(755) 00:11:41.193 fused_ordering(756) 00:11:41.193 fused_ordering(757) 00:11:41.193 fused_ordering(758) 00:11:41.193 fused_ordering(759) 00:11:41.193 fused_ordering(760) 00:11:41.193 fused_ordering(761) 00:11:41.193 fused_ordering(762) 00:11:41.193 fused_ordering(763) 00:11:41.193 fused_ordering(764) 00:11:41.193 fused_ordering(765) 00:11:41.193 fused_ordering(766) 00:11:41.193 fused_ordering(767) 00:11:41.193 fused_ordering(768) 00:11:41.193 fused_ordering(769) 00:11:41.193 fused_ordering(770) 00:11:41.193 fused_ordering(771) 00:11:41.193 fused_ordering(772) 00:11:41.193 fused_ordering(773) 00:11:41.193 fused_ordering(774) 00:11:41.193 fused_ordering(775) 00:11:41.193 fused_ordering(776) 00:11:41.193 fused_ordering(777) 00:11:41.193 fused_ordering(778) 00:11:41.193 fused_ordering(779) 00:11:41.193 fused_ordering(780) 00:11:41.193 fused_ordering(781) 00:11:41.193 fused_ordering(782) 00:11:41.193 fused_ordering(783) 00:11:41.193 fused_ordering(784) 00:11:41.193 fused_ordering(785) 00:11:41.193 fused_ordering(786) 00:11:41.193 fused_ordering(787) 00:11:41.193 fused_ordering(788) 00:11:41.193 fused_ordering(789) 00:11:41.193 fused_ordering(790) 00:11:41.193 fused_ordering(791) 00:11:41.193 fused_ordering(792) 00:11:41.193 fused_ordering(793) 00:11:41.193 fused_ordering(794) 00:11:41.193 fused_ordering(795) 00:11:41.193 fused_ordering(796) 00:11:41.193 fused_ordering(797) 00:11:41.193 fused_ordering(798) 00:11:41.193 fused_ordering(799) 00:11:41.193 fused_ordering(800) 00:11:41.193 fused_ordering(801) 00:11:41.193 fused_ordering(802) 00:11:41.193 fused_ordering(803) 00:11:41.193 fused_ordering(804) 00:11:41.193 fused_ordering(805) 00:11:41.194 fused_ordering(806) 00:11:41.194 fused_ordering(807) 00:11:41.194 fused_ordering(808) 00:11:41.194 fused_ordering(809) 00:11:41.194 fused_ordering(810) 00:11:41.194 fused_ordering(811) 00:11:41.194 fused_ordering(812) 00:11:41.194 fused_ordering(813) 00:11:41.194 fused_ordering(814) 00:11:41.194 fused_ordering(815) 00:11:41.194 fused_ordering(816) 00:11:41.194 fused_ordering(817) 00:11:41.194 fused_ordering(818) 00:11:41.194 fused_ordering(819) 00:11:41.194 fused_ordering(820) 00:11:41.194 fused_ordering(821) 00:11:41.194 fused_ordering(822) 00:11:41.194 fused_ordering(823) 00:11:41.194 fused_ordering(824) 00:11:41.194 fused_ordering(825) 00:11:41.194 fused_ordering(826) 00:11:41.194 fused_ordering(827) 00:11:41.194 fused_ordering(828) 00:11:41.194 fused_ordering(829) 00:11:41.194 fused_ordering(830) 00:11:41.194 fused_ordering(831) 00:11:41.194 fused_ordering(832) 00:11:41.194 fused_ordering(833) 00:11:41.194 fused_ordering(834) 00:11:41.194 fused_ordering(835) 00:11:41.194 fused_ordering(836) 00:11:41.194 fused_ordering(837) 00:11:41.194 fused_ordering(838) 00:11:41.194 fused_ordering(839) 00:11:41.194 fused_ordering(840) 00:11:41.194 fused_ordering(841) 00:11:41.194 fused_ordering(842) 00:11:41.194 fused_ordering(843) 00:11:41.194 fused_ordering(844) 00:11:41.194 fused_ordering(845) 00:11:41.194 fused_ordering(846) 00:11:41.194 fused_ordering(847) 00:11:41.194 fused_ordering(848) 00:11:41.194 fused_ordering(849) 00:11:41.194 fused_ordering(850) 00:11:41.194 
fused_ordering(851) 00:11:41.194 fused_ordering(852) 00:11:41.194 fused_ordering(853) 00:11:41.194 fused_ordering(854) 00:11:41.194 fused_ordering(855) 00:11:41.194 fused_ordering(856) 00:11:41.194 fused_ordering(857) 00:11:41.194 fused_ordering(858) 00:11:41.194 fused_ordering(859) 00:11:41.194 fused_ordering(860) 00:11:41.194 fused_ordering(861) 00:11:41.194 fused_ordering(862) 00:11:41.194 fused_ordering(863) 00:11:41.194 fused_ordering(864) 00:11:41.194 fused_ordering(865) 00:11:41.194 fused_ordering(866) 00:11:41.194 fused_ordering(867) 00:11:41.194 fused_ordering(868) 00:11:41.194 fused_ordering(869) 00:11:41.194 fused_ordering(870) 00:11:41.194 fused_ordering(871) 00:11:41.194 fused_ordering(872) 00:11:41.194 fused_ordering(873) 00:11:41.194 fused_ordering(874) 00:11:41.194 fused_ordering(875) 00:11:41.194 fused_ordering(876) 00:11:41.194 fused_ordering(877) 00:11:41.194 fused_ordering(878) 00:11:41.194 fused_ordering(879) 00:11:41.194 fused_ordering(880) 00:11:41.194 fused_ordering(881) 00:11:41.194 fused_ordering(882) 00:11:41.194 fused_ordering(883) 00:11:41.194 fused_ordering(884) 00:11:41.194 fused_ordering(885) 00:11:41.194 fused_ordering(886) 00:11:41.194 fused_ordering(887) 00:11:41.194 fused_ordering(888) 00:11:41.194 fused_ordering(889) 00:11:41.194 fused_ordering(890) 00:11:41.194 fused_ordering(891) 00:11:41.194 fused_ordering(892) 00:11:41.194 fused_ordering(893) 00:11:41.194 fused_ordering(894) 00:11:41.194 fused_ordering(895) 00:11:41.194 fused_ordering(896) 00:11:41.194 fused_ordering(897) 00:11:41.194 fused_ordering(898) 00:11:41.194 fused_ordering(899) 00:11:41.194 fused_ordering(900) 00:11:41.194 fused_ordering(901) 00:11:41.194 fused_ordering(902) 00:11:41.194 fused_ordering(903) 00:11:41.194 fused_ordering(904) 00:11:41.194 fused_ordering(905) 00:11:41.194 fused_ordering(906) 00:11:41.194 fused_ordering(907) 00:11:41.194 fused_ordering(908) 00:11:41.194 fused_ordering(909) 00:11:41.194 fused_ordering(910) 00:11:41.194 fused_ordering(911) 00:11:41.194 fused_ordering(912) 00:11:41.194 fused_ordering(913) 00:11:41.194 fused_ordering(914) 00:11:41.194 fused_ordering(915) 00:11:41.194 fused_ordering(916) 00:11:41.194 fused_ordering(917) 00:11:41.194 fused_ordering(918) 00:11:41.194 fused_ordering(919) 00:11:41.194 fused_ordering(920) 00:11:41.194 fused_ordering(921) 00:11:41.194 fused_ordering(922) 00:11:41.194 fused_ordering(923) 00:11:41.194 fused_ordering(924) 00:11:41.194 fused_ordering(925) 00:11:41.194 fused_ordering(926) 00:11:41.194 fused_ordering(927) 00:11:41.194 fused_ordering(928) 00:11:41.194 fused_ordering(929) 00:11:41.194 fused_ordering(930) 00:11:41.194 fused_ordering(931) 00:11:41.194 fused_ordering(932) 00:11:41.194 fused_ordering(933) 00:11:41.194 fused_ordering(934) 00:11:41.194 fused_ordering(935) 00:11:41.194 fused_ordering(936) 00:11:41.194 fused_ordering(937) 00:11:41.194 fused_ordering(938) 00:11:41.194 fused_ordering(939) 00:11:41.194 fused_ordering(940) 00:11:41.194 fused_ordering(941) 00:11:41.194 fused_ordering(942) 00:11:41.194 fused_ordering(943) 00:11:41.194 fused_ordering(944) 00:11:41.194 fused_ordering(945) 00:11:41.194 fused_ordering(946) 00:11:41.194 fused_ordering(947) 00:11:41.194 fused_ordering(948) 00:11:41.194 fused_ordering(949) 00:11:41.194 fused_ordering(950) 00:11:41.194 fused_ordering(951) 00:11:41.194 fused_ordering(952) 00:11:41.194 fused_ordering(953) 00:11:41.194 fused_ordering(954) 00:11:41.194 fused_ordering(955) 00:11:41.194 fused_ordering(956) 00:11:41.194 fused_ordering(957) 00:11:41.194 fused_ordering(958) 
00:11:41.194 fused_ordering(959) 00:11:41.194 fused_ordering(960) 00:11:41.194 fused_ordering(961) 00:11:41.194 fused_ordering(962) 00:11:41.194 fused_ordering(963) 00:11:41.194 fused_ordering(964) 00:11:41.194 fused_ordering(965) 00:11:41.194 fused_ordering(966) 00:11:41.194 fused_ordering(967) 00:11:41.194 fused_ordering(968) 00:11:41.194 fused_ordering(969) 00:11:41.194 fused_ordering(970) 00:11:41.194 fused_ordering(971) 00:11:41.194 fused_ordering(972) 00:11:41.194 fused_ordering(973) 00:11:41.194 fused_ordering(974) 00:11:41.194 fused_ordering(975) 00:11:41.194 fused_ordering(976) 00:11:41.194 fused_ordering(977) 00:11:41.194 fused_ordering(978) 00:11:41.194 fused_ordering(979) 00:11:41.194 fused_ordering(980) 00:11:41.194 fused_ordering(981) 00:11:41.194 fused_ordering(982) 00:11:41.194 fused_ordering(983) 00:11:41.194 fused_ordering(984) 00:11:41.194 fused_ordering(985) 00:11:41.194 fused_ordering(986) 00:11:41.194 fused_ordering(987) 00:11:41.194 fused_ordering(988) 00:11:41.194 fused_ordering(989) 00:11:41.194 fused_ordering(990) 00:11:41.194 fused_ordering(991) 00:11:41.194 fused_ordering(992) 00:11:41.194 fused_ordering(993) 00:11:41.194 fused_ordering(994) 00:11:41.194 fused_ordering(995) 00:11:41.194 fused_ordering(996) 00:11:41.194 fused_ordering(997) 00:11:41.194 fused_ordering(998) 00:11:41.194 fused_ordering(999) 00:11:41.194 fused_ordering(1000) 00:11:41.194 fused_ordering(1001) 00:11:41.194 fused_ordering(1002) 00:11:41.194 fused_ordering(1003) 00:11:41.194 fused_ordering(1004) 00:11:41.194 fused_ordering(1005) 00:11:41.194 fused_ordering(1006) 00:11:41.194 fused_ordering(1007) 00:11:41.194 fused_ordering(1008) 00:11:41.194 fused_ordering(1009) 00:11:41.194 fused_ordering(1010) 00:11:41.194 fused_ordering(1011) 00:11:41.194 fused_ordering(1012) 00:11:41.194 fused_ordering(1013) 00:11:41.194 fused_ordering(1014) 00:11:41.194 fused_ordering(1015) 00:11:41.194 fused_ordering(1016) 00:11:41.194 fused_ordering(1017) 00:11:41.194 fused_ordering(1018) 00:11:41.194 fused_ordering(1019) 00:11:41.194 fused_ordering(1020) 00:11:41.194 fused_ordering(1021) 00:11:41.194 fused_ordering(1022) 00:11:41.194 fused_ordering(1023) 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.194 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:41.194 rmmod nvme_rdma 00:11:41.454 rmmod nvme_fabrics 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:41.454 21:08:28 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3844265 ']' 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3844265 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3844265 ']' 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3844265 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3844265 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3844265' 00:11:41.454 killing process with pid 3844265 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3844265 00:11:41.454 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3844265 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:41.715 00:11:41.715 real 0m6.657s 00:11:41.715 user 0m3.357s 00:11:41.715 sys 0m4.280s 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:41.715 ************************************ 00:11:41.715 END TEST nvmf_fused_ordering 00:11:41.715 ************************************ 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.715 ************************************ 00:11:41.715 START TEST nvmf_ns_masking 00:11:41.715 ************************************ 00:11:41.715 21:08:28 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:41.715 * Looking for test storage... 
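[editorial note] The killprocess teardown traced just above reduces to a small pattern: confirm a pid was recorded, probe it with kill -0, resolve the process name so a sudo wrapper is never signalled directly, then kill and reap. A minimal sketch of that flow, reconstructed from the traced commands (the sudo branch is not exercised above, so its handling here is an assumption):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1               # no pid recorded, nothing to kill
      kill -0 "$pid" 2>/dev/null || return 0  # process already exited
      local name=
      [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")
      [ "$name" = sudo ] && return 1          # guard: never signal a sudo wrapper directly (assumption)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                     # reap; a nonzero exit is tolerated during teardown
  }

In the trace the pid is 3844265 and the name resolves to reactor_1, so the guard falls through to the plain kill/wait path.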
00:11:41.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:41.715 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:41.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.716 --rc genhtml_branch_coverage=1 00:11:41.716 --rc genhtml_function_coverage=1 00:11:41.716 --rc genhtml_legend=1 00:11:41.716 --rc geninfo_all_blocks=1 00:11:41.716 --rc geninfo_unexecuted_blocks=1 00:11:41.716 00:11:41.716 ' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:41.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.716 --rc genhtml_branch_coverage=1 00:11:41.716 --rc genhtml_function_coverage=1 00:11:41.716 --rc genhtml_legend=1 00:11:41.716 --rc geninfo_all_blocks=1 00:11:41.716 --rc geninfo_unexecuted_blocks=1 00:11:41.716 00:11:41.716 ' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:41.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.716 --rc genhtml_branch_coverage=1 00:11:41.716 --rc genhtml_function_coverage=1 00:11:41.716 --rc genhtml_legend=1 00:11:41.716 --rc geninfo_all_blocks=1 00:11:41.716 --rc geninfo_unexecuted_blocks=1 00:11:41.716 00:11:41.716 ' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:41.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.716 --rc genhtml_branch_coverage=1 00:11:41.716 --rc genhtml_function_coverage=1 00:11:41.716 --rc genhtml_legend=1 00:11:41.716 --rc geninfo_all_blocks=1 00:11:41.716 --rc geninfo_unexecuted_blocks=1 00:11:41.716 00:11:41.716 ' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.716 21:08:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:41.716 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:41.716 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:41.974 21:08:29 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0abc0881-e7b5-4b86-825e-bdabf780a8fe 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7de313d6-9821-4370-bf4b-ade8cd6217d2 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1ef96a83-1cc1-4d31-b247-12dc98775616 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:41.974 21:08:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.249 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.250 21:08:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:11:47.250 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:11:47.250 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:47.250 Found net devices under 0000:18:00.0: mlx_0_0 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:47.250 Found net devices under 0000:18:00.1: mlx_0_1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:47.250 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:47.250 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:11:47.250 altname enp24s0f0np0 00:11:47.250 altname ens785f0np0 00:11:47.250 inet 192.168.100.8/24 scope global mlx_0_0 00:11:47.250 valid_lft forever preferred_lft forever 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:47.250 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:47.250 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:47.250 link/ether 
50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:11:47.250 altname enp24s0f1np1 00:11:47.251 altname ens785f1np1 00:11:47.251 inet 192.168.100.9/24 scope global mlx_0_1 00:11:47.251 valid_lft forever preferred_lft forever 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:47.251 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:47.509 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:47.509 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:47.509 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.509 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:47.509 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
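[editorial note] The get_ip_address calls being traced here are a single pipeline: `ip -o -4 addr show <if>` prints one record per address with the CIDR-form address in field 4, awk extracts that field, and cut strips the prefix length. A condensed sketch of the helper exactly as the trace exercises it:

  get_ip_address() {
      local interface=$1
      # -o prints one record per address; field 4 is e.g. 192.168.100.8/24
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # 192.168.100.8 in the trace above
  get_ip_address mlx_0_1   # 192.168.100.9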
00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:47.510 192.168.100.9' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:47.510 192.168.100.9' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:47.510 192.168.100.9' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3847753 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3847753 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3847753 ']' 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.510 21:08:34 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.510 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:47.510 [2024-11-26 21:08:34.809705] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:11:47.510 [2024-11-26 21:08:34.809748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.510 [2024-11-26 21:08:34.868454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.510 [2024-11-26 21:08:34.906085] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.510 [2024-11-26 21:08:34.906117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.510 [2024-11-26 21:08:34.906123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.510 [2024-11-26 21:08:34.906131] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.510 [2024-11-26 21:08:34.906136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:47.510 [2024-11-26 21:08:34.906600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.769 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.769 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:47.769 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.769 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.769 21:08:34 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:47.769 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.769 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:48.028 [2024-11-26 21:08:35.206147] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fee2e0/0x1ff27d0) succeed. 00:11:48.028 [2024-11-26 21:08:35.214715] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fef790/0x2033e70) succeed. 
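(Sketch, not captured output.) The trace above has just created the RDMA transport and instantiated both mlx5 IB devices; the lines that follow provision the target side of the masking test. A minimal recap of that RPC sequence, assuming a running nvmf_tgt and the SPDK tree's scripts/rpc.py on PATH — NQN, serial, and size values mirror the trace:
  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1    # 64 MiB backing bdev, 512-byte blocks
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420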
00:11:48.028 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:48.028 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:48.028 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:48.028 Malloc1 00:11:48.028 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:48.286 Malloc2 00:11:48.286 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.544 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:48.544 21:08:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:48.803 [2024-11-26 21:08:36.124960] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:48.803 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:48.803 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1ef96a83-1cc1-4d31-b247-12dc98775616 -a 192.168.100.8 -s 4420 -i 4 00:11:49.062 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.062 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.062 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.062 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:49.062 21:08:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:51.599 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.600 [ 0]:0x1 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64d05dce2def41fda55deaafa578a547 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64d05dce2def41fda55deaafa578a547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:51.600 [ 0]:0x1 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64d05dce2def41fda55deaafa578a547 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64d05dce2def41fda55deaafa578a547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:51.600 [ 1]:0x2 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:51.600 21:08:38 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:11:51.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.859 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.118 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:52.118 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:52.118 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1ef96a83-1cc1-4d31-b247-12dc98775616 -a 192.168.100.8 -s 4420 -i 4 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:52.377 21:08:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.914 [ 0]:0x2 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.914 21:08:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:54.914 [ 0]:0x1 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.914 21:08:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64d05dce2def41fda55deaafa578a547 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64d05dce2def41fda55deaafa578a547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:54.914 [ 1]:0x2 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.914 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:55.174 [ 0]:0x2 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:55.174 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.433 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:55.693 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:55.693 21:08:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1ef96a83-1cc1-4d31-b247-12dc98775616 -a 192.168.100.8 -s 4420 -i 4 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:55.951 21:08:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.855 21:08:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:57.855 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.114 [ 0]:0x1 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=64d05dce2def41fda55deaafa578a547 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 64d05dce2def41fda55deaafa578a547 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.114 [ 1]:0x2 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.114 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:58.374 21:08:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.374 [ 0]:0x2 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:58.374 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:58.634 [2024-11-26 21:08:45.833023] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:58.634 request: 00:11:58.634 { 00:11:58.634 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:58.634 "nsid": 2, 00:11:58.634 "host": "nqn.2016-06.io.spdk:host1", 00:11:58.634 "method": "nvmf_ns_remove_host", 00:11:58.634 "req_id": 1 00:11:58.634 } 00:11:58.634 Got JSON-RPC error response 00:11:58.634 response: 00:11:58.634 { 00:11:58.634 "code": -32602, 00:11:58.634 "message": "Invalid parameters" 00:11:58.634 } 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:58.634 21:08:45 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:58.634 [ 0]:0x2 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d46b6e7d238e4b08a4216cfa624d221c 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d46b6e7d238e4b08a4216cfa624d221c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:58.634 21:08:45 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3849953 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3849953 /var/tmp/host.sock 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3849953 ']' 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:58.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
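(Sketch, not captured output.) The "Waiting for process..." line above belongs to a second SPDK app, started with -r /var/tmp/host.sock to serve as the host-side RPC endpoint for the attach-controller checks that follow. Before it was launched, the trace exercised namespace masking end to end; a condensed recap, with command spellings taken verbatim from the trace (the host NQN is the test's fixture; the -I host-UUID flag from the trace is omitted here for brevity):
  # Target side: attach NSID 1 without auto-visibility, then grant/revoke per-host access
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Host side: connect over RDMA, then probe what the controller exposes
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 192.168.100.8 -s 4420 -i 4
  nvme list-ns /dev/nvme0 | grep 0x1                    # NSID listed only while access is granted
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all-zero NGUID when the namespace is masked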
00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.894 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:58.894 [2024-11-26 21:08:46.310645] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:11:58.894 [2024-11-26 21:08:46.310693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3849953 ] 00:11:59.154 [2024-11-26 21:08:46.371176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.154 [2024-11-26 21:08:46.408875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.414 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.414 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:59.414 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.414 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.674 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0abc0881-e7b5-4b86-825e-bdabf780a8fe 00:11:59.674 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:59.674 21:08:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0ABC0881E7B54B86825EBDABF780A8FE -i 00:11:59.933 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7de313d6-9821-4370-bf4b-ade8cd6217d2 00:11:59.933 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:59.933 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7DE313D698214370BF4BADE8CD6217D2 -i 00:11:59.933 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.192 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:00.452 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:00.452 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:12:00.452 nvme0n1 00:12:00.715 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:00.715 21:08:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:00.715 nvme1n2 00:12:00.715 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:00.715 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:00.715 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:00.715 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:00.715 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:01.030 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:01.030 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:01.030 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:01.030 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:01.309 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0abc0881-e7b5-4b86-825e-bdabf780a8fe == \0\a\b\c\0\8\8\1\-\e\7\b\5\-\4\b\8\6\-\8\2\5\e\-\b\d\a\b\f\7\8\0\a\8\f\e ]] 00:12:01.310 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:01.310 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:01.310 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:01.310 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7de313d6-9821-4370-bf4b-ade8cd6217d2 == \7\d\e\3\1\3\d\6\-\9\8\2\1\-\4\3\7\0\-\b\f\4\b\-\a\d\e\8\c\d\6\2\1\7\d\2 ]] 00:12:01.310 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.607 21:08:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0abc0881-e7b5-4b86-825e-bdabf780a8fe 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0ABC0881E7B54B86825EBDABF780A8FE 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0ABC0881E7B54B86825EBDABF780A8FE 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.607 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:01.608 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0ABC0881E7B54B86825EBDABF780A8FE 00:12:01.866 [2024-11-26 21:08:49.196985] bdev.c:8437:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:01.866 [2024-11-26 21:08:49.197015] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:01.866 [2024-11-26 21:08:49.197023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:01.866 request: 00:12:01.866 { 00:12:01.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:01.866 "namespace": { 00:12:01.866 "bdev_name": "invalid", 00:12:01.866 "nsid": 1, 00:12:01.866 "nguid": "0ABC0881E7B54B86825EBDABF780A8FE", 00:12:01.866 "no_auto_visible": false 00:12:01.866 }, 00:12:01.866 "method": "nvmf_subsystem_add_ns", 00:12:01.866 "req_id": 1 00:12:01.866 } 00:12:01.866 Got JSON-RPC error response 00:12:01.866 response: 00:12:01.866 { 00:12:01.866 "code": -32602, 00:12:01.866 "message": "Invalid parameters" 00:12:01.866 } 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:01.866 21:08:49 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0abc0881-e7b5-4b86-825e-bdabf780a8fe 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:01.866 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0ABC0881E7B54B86825EBDABF780A8FE -i 00:12:02.125 21:08:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:04.030 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:04.030 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:04.030 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3849953 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3849953 ']' 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3849953 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3849953 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3849953' 00:12:04.289 killing process with pid 3849953 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3849953 00:12:04.289 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3849953 00:12:04.547 21:08:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.807 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:04.807 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:04.807 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:04.808 21:08:52 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:04.808 rmmod nvme_rdma 00:12:04.808 rmmod nvme_fabrics 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3847753 ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3847753 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3847753 ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3847753 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3847753 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3847753' 00:12:04.808 killing process with pid 3847753 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3847753 00:12:04.808 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3847753 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:05.068 00:12:05.068 real 0m23.490s 00:12:05.068 user 0m30.119s 00:12:05.068 sys 0m6.167s 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:05.068 ************************************ 00:12:05.068 END TEST nvmf_ns_masking 00:12:05.068 ************************************ 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:05.068 21:08:52 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.329 ************************************ 00:12:05.329 START TEST nvmf_nvme_cli 00:12:05.329 ************************************ 
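(Sketch, not captured output.) ns_masking finished cleanly after roughly 23 seconds of wall time, nvmftestfini tore the target down, and the harness chained into nvmf_nvme_cli via run_test — the START banner just above. The teardown steps as traced, in recap form:
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-rdma      # run under set +e; unloading also drops nvme_fabrics
  modprobe -v -r nvme-fabrics
  killprocess $nvmfpid          # harness helper; pid 3847753 in this run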
00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:05.329 * Looking for test storage... 00:12:05.329 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:05.329 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:05.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:05.330 --rc genhtml_branch_coverage=1 00:12:05.330 --rc genhtml_function_coverage=1 00:12:05.330 --rc genhtml_legend=1 00:12:05.330 --rc geninfo_all_blocks=1 00:12:05.330 --rc geninfo_unexecuted_blocks=1 00:12:05.330 00:12:05.330 ' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:05.330 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:05.330 21:08:52 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:05.330 21:08:52 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:10.610 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:10.610 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:10.610 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:10.611 Found net devices under 0000:18:00.0: mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:10.611 Found net devices under 0000:18:00.1: mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:10.611 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:10.611 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:10.611 altname enp24s0f0np0 00:12:10.611 altname ens785f0np0 00:12:10.611 inet 192.168.100.8/24 scope global mlx_0_0 00:12:10.611 valid_lft forever preferred_lft forever 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:10.611 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:10.611 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:10.611 altname enp24s0f1np1 00:12:10.611 altname ens785f1np1 00:12:10.611 inet 192.168.100.9/24 scope global mlx_0_1 00:12:10.611 valid_lft forever preferred_lft forever 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:10.611 21:08:57 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:10.611 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:10.612 192.168.100.9' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:10.612 192.168.100.9' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:10.612 192.168.100.9' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:12:10.612 21:08:57 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3854300 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3854300 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3854300 ']' 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:10.612 21:08:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.612 [2024-11-26 21:08:58.017887] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:12:10.612 [2024-11-26 21:08:58.017943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.871 [2024-11-26 21:08:58.079058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:10.871 [2024-11-26 21:08:58.122787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.871 [2024-11-26 21:08:58.122823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
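nvmfappstart above boils down to launching the target binary in the background and blocking until its RPC socket answers. A minimal sketch of that start-and-wait pattern using the flags from the trace (shm id 0, all tracepoint groups via -e 0xFFFF, cores 0-3 via -m 0xF); the polling loop is an illustration of what waitforlisten accomplishes, not its actual implementation:

spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Start the NVMe-oF target with the flags shown in the trace.
$spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the UNIX-domain RPC socket until the app is ready to serve calls.
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
  sleep 0.5
done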
00:12:10.871 [2024-11-26 21:08:58.122830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.871 [2024-11-26 21:08:58.122836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.871 [2024-11-26 21:08:58.122841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:10.871 [2024-11-26 21:08:58.124223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.871 [2024-11-26 21:08:58.124320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:10.871 [2024-11-26 21:08:58.124340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:10.871 [2024-11-26 21:08:58.124342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.871 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:10.871 [2024-11-26 21:08:58.286207] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1132530/0x1136a20) succeed. 00:12:10.871 [2024-11-26 21:08:58.294425] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1133bc0/0x11780c0) succeed. 
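With the target running and both mlx5 IB devices registered, everything else is provisioned over JSON-RPC: an RDMA transport, two 64 MiB malloc bdevs, a subsystem carrying both as namespaces, and RDMA listeners for the subsystem and the discovery service. The rpc_cmd calls in the surrounding trace reduce to this sequence, shown here with the stock rpc.py client (paths, flags, and values are the ones the test uses):

rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB, 512 B blocks
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420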
00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 Malloc0 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 Malloc1 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 [2024-11-26 21:08:58.500399] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:11.130 21:08:58 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.130 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:12:11.390 00:12:11.390 Discovery Log Number of Records 2, Generation counter 2 00:12:11.390 =====Discovery Log Entry 0====== 00:12:11.390 trtype: rdma 00:12:11.390 adrfam: ipv4 00:12:11.390 subtype: current discovery subsystem 00:12:11.390 treq: not required 00:12:11.390 portid: 0 00:12:11.390 trsvcid: 4420 00:12:11.390 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:11.390 traddr: 192.168.100.8 00:12:11.390 eflags: explicit discovery connections, duplicate discovery information 00:12:11.390 rdma_prtype: not specified 00:12:11.390 rdma_qptype: connected 00:12:11.390 rdma_cms: rdma-cm 00:12:11.390 rdma_pkey: 0x0000 00:12:11.390 =====Discovery Log Entry 1====== 00:12:11.390 trtype: rdma 00:12:11.390 adrfam: ipv4 00:12:11.390 subtype: nvme subsystem 00:12:11.390 treq: not required 00:12:11.390 portid: 0 00:12:11.390 trsvcid: 4420 00:12:11.390 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:11.390 traddr: 192.168.100.8 00:12:11.390 eflags: none 00:12:11.390 rdma_prtype: not specified 00:12:11.390 rdma_qptype: connected 00:12:11.390 rdma_cms: rdma-cm 00:12:11.390 rdma_pkey: 0x0000 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:11.390 21:08:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
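On the initiator side the test drives stock nvme-cli end to end: read the discovery log shown above, connect to cnode1 with the 15-second reconnect interval that NVME_CONNECT gains on RDMA links, then poll lsblk until both namespaces surface with the subsystem serial. Condensed from the trace (the polling loop is a simplified stand-in for the waitforserial helper):

host="--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562"
nvme discover $host -t rdma -a 192.168.100.8 -s 4420
nvme connect -i 15 $host -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
# Block devices appear asynchronously; wait for both namespaces by serial.
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) >= 2 )); do
  sleep 2
done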
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:12.325 21:08:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:14.226 /dev/nvme0n2 ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:14.226 21:09:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.604 
21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:15.604 rmmod nvme_rdma 00:12:15.604 rmmod nvme_fabrics 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3854300 ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3854300 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3854300 ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3854300 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3854300 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3854300' 00:12:15.604 killing process with pid 3854300 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3854300 00:12:15.604 21:09:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3854300 00:12:15.604 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:15.604 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:15.604 00:12:15.604 real 0m10.521s 00:12:15.604 user 0m20.823s 00:12:15.604 sys 0m4.585s 00:12:15.604 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.604 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.604 ************************************ 00:12:15.604 END TEST nvmf_nvme_cli 00:12:15.604 ************************************ 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:15.865 ************************************ 00:12:15.865 START TEST nvmf_auth_target 00:12:15.865 ************************************ 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:15.865 * Looking for test storage... 00:12:15.865 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.865 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.866 --rc genhtml_branch_coverage=1 00:12:15.866 --rc genhtml_function_coverage=1 00:12:15.866 --rc genhtml_legend=1 00:12:15.866 --rc geninfo_all_blocks=1 00:12:15.866 --rc geninfo_unexecuted_blocks=1 00:12:15.866 00:12:15.866 ' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.866 --rc genhtml_branch_coverage=1 00:12:15.866 --rc genhtml_function_coverage=1 00:12:15.866 --rc genhtml_legend=1 00:12:15.866 --rc geninfo_all_blocks=1 00:12:15.866 --rc geninfo_unexecuted_blocks=1 00:12:15.866 00:12:15.866 ' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.866 --rc genhtml_branch_coverage=1 00:12:15.866 --rc genhtml_function_coverage=1 00:12:15.866 --rc genhtml_legend=1 00:12:15.866 --rc geninfo_all_blocks=1 00:12:15.866 --rc geninfo_unexecuted_blocks=1 00:12:15.866 00:12:15.866 ' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.866 --rc genhtml_branch_coverage=1 00:12:15.866 --rc genhtml_function_coverage=1 00:12:15.866 --rc genhtml_legend=1 00:12:15.866 --rc geninfo_all_blocks=1 00:12:15.866 --rc geninfo_unexecuted_blocks=1 00:12:15.866 00:12:15.866 ' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.866 21:09:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.866 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.866 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.867 21:09:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.144 21:09:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.144 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:12:21.145 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:21.145 21:09:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:12:21.145 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:21.145 Found net devices under 0000:18:00.0: mlx_0_0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:21.145 Found net devices under 0000:18:00.1: mlx_0_1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.145 21:09:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:21.145 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:21.145 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:12:21.145 altname enp24s0f0np0 00:12:21.145 altname ens785f0np0 00:12:21.145 inet 192.168.100.8/24 scope global mlx_0_0 00:12:21.145 valid_lft forever preferred_lft forever 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:21.145 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:21.145 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:12:21.145 altname enp24s0f1np1 00:12:21.145 altname ens785f1np1 00:12:21.145 inet 192.168.100.9/24 scope global mlx_0_1 00:12:21.145 valid_lft forever preferred_lft forever 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:21.145 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:21.406 21:09:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:21.406 192.168.100.9' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:21.406 192.168.100.9' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:21.406 192.168.100.9' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:21.406 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3859145 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3859145 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3859145 ']' 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
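[annotation] The RDMA_IP_LIST handling just traced selects the first and second discovered addresses with head/tail; a compact sketch of that selection, using the two addresses found on this machine:

# One discovered address per line; line 1 -> first target IP, line 2 -> second.
RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9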
00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.407 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3859170 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cee05b1a6b89a3760df3e392ac0e69ee64e4586202dfd0c5 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8hS 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cee05b1a6b89a3760df3e392ac0e69ee64e4586202dfd0c5 0 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cee05b1a6b89a3760df3e392ac0e69ee64e4586202dfd0c5 0 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cee05b1a6b89a3760df3e392ac0e69ee64e4586202dfd0c5 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8hS 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8hS 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.8hS 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=97948a1765df985671d5f579f0fa1d61991c3c6aa31349739a1344bf1e433aea 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9sv 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 97948a1765df985671d5f579f0fa1d61991c3c6aa31349739a1344bf1e433aea 3 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 97948a1765df985671d5f579f0fa1d61991c3c6aa31349739a1344bf1e433aea 3 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=97948a1765df985671d5f579f0fa1d61991c3c6aa31349739a1344bf1e433aea 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:21.666 21:09:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9sv 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9sv 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.9sv 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.666 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.667 21:09:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=97f6d7f483a79701c1be22f8da3929c0 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0op 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 97f6d7f483a79701c1be22f8da3929c0 1 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 97f6d7f483a79701c1be22f8da3929c0 1 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=97f6d7f483a79701c1be22f8da3929c0 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0op 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0op 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.0op 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b33ee7fb1d693e1138fddf727874820aa136192a8f03c71a 00:12:21.667 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Jc 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b33ee7fb1d693e1138fddf727874820aa136192a8f03c71a 2 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b33ee7fb1d693e1138fddf727874820aa136192a8f03c71a 2 00:12:21.925 21:09:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b33ee7fb1d693e1138fddf727874820aa136192a8f03c71a 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Jc 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Jc 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.0Jc 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=856c9f8d3f7e244805cffbadd268c1fd09c790c9591ce73f 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.hgy 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 856c9f8d3f7e244805cffbadd268c1fd09c790c9591ce73f 2 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 856c9f8d3f7e244805cffbadd268c1fd09c790c9591ce73f 2 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=856c9f8d3f7e244805cffbadd268c1fd09c790c9591ce73f 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.hgy 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.hgy 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.hgy 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=48e43b4e769481b24255c4d5c25391ec 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Y9z 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 48e43b4e769481b24255c4d5c25391ec 1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 48e43b4e769481b24255c4d5c25391ec 1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=48e43b4e769481b24255c4d5c25391ec 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Y9z 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Y9z 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Y9z 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b0c597f2cb43760447f504dc8354f110b7837a51d08482ccdefc056742af75e9 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:12:21.925 21:09:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.t83 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b0c597f2cb43760447f504dc8354f110b7837a51d08482ccdefc056742af75e9 3 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b0c597f2cb43760447f504dc8354f110b7837a51d08482ccdefc056742af75e9 3 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b0c597f2cb43760447f504dc8354f110b7837a51d08482ccdefc056742af75e9 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.t83 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.t83 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.t83 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3859145 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3859145 ']' 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.925 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.926 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.926 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.926 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3859170 /var/tmp/host.sock 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3859170 ']' 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
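[annotation] The eight gen_dhchap_key calls traced above follow one recipe: xxd pulls the requested number of random bytes from /dev/urandom as hex, an inline python formatter (its body is elided by the trace) wraps them as a DHHC-1 secret, and the result lands in a 0600-mode /tmp/spdk.key-* file. The sketch below reconstructs one "gen_dhchap_key null 48" call; the python step is an assumption based on the standard NVMe DH-HMAC-CHAP secret representation (base64 of the key bytes plus a little-endian CRC-32 of those bytes), not SPDK's verbatim helper:

key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> 48 hex chars, as for "null 48"
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$file" <<'PY'    # digest index: 0=null, 1/2/3=sha256/sha384/sha512
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(key).to_bytes(4, "little")  # assumed CRC-32 tail per the NVMe spec
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$file"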
00:12:22.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.184 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8hS 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.8hS 00:12:22.443 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.8hS 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.9sv ]] 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9sv 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9sv 00:12:22.702 21:09:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9sv 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0op 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.702 21:09:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0op 00:12:22.702 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0op 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.0Jc ]] 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Jc 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Jc 00:12:22.960 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Jc 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hgy 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.hgy 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.hgy 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Y9z ]] 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y9z 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y9z 00:12:23.218 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y9z 00:12:23.476 21:09:10 
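The keyring loads above repeat one pattern per slot: each secret file is registered under a stable name (keyN, ckeyN) on the target through rpc_cmd and again on the host app through hostrpc, and the controller-key branch only fires when the slot has one (slot 3 will not, per the empty ckeys[3]= earlier). The loop at target/auth.sh@108-113, condensed to its shape:

    for i in "${!keys[@]}"; do
      rpc_cmd keyring_file_add_key "key$i" "${keys[$i]}"     # target, /var/tmp/spdk.sock
      hostrpc keyring_file_add_key "key$i" "${keys[$i]}"     # host app, /var/tmp/host.sock
      if [[ -n ${ckeys[$i]} ]]; then
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[$i]}"
        hostrpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
      fi
    done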
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:12:23.476 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.t83 00:12:23.476 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.477 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.477 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.477 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.t83 00:12:23.477 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.t83 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.735 21:09:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.735 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:23.993 00:12:23.993 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.993 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.993 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:24.250 { 00:12:24.250 "cntlid": 1, 00:12:24.250 "qid": 0, 00:12:24.250 "state": "enabled", 00:12:24.250 "thread": "nvmf_tgt_poll_group_000", 00:12:24.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:24.250 "listen_address": { 00:12:24.250 "trtype": "RDMA", 00:12:24.250 "adrfam": "IPv4", 00:12:24.250 "traddr": "192.168.100.8", 00:12:24.250 "trsvcid": "4420" 00:12:24.250 }, 00:12:24.250 "peer_address": { 00:12:24.250 "trtype": "RDMA", 00:12:24.250 "adrfam": "IPv4", 00:12:24.250 "traddr": "192.168.100.8", 00:12:24.250 "trsvcid": "39385" 00:12:24.250 }, 00:12:24.250 "auth": { 00:12:24.250 "state": "completed", 00:12:24.250 "digest": "sha256", 00:12:24.250 "dhgroup": "null" 00:12:24.250 } 00:12:24.250 } 00:12:24.250 ]' 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:24.250 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:24.508 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:24.508 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.508 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:12:24.508 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:24.508 21:09:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:25.076 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:25.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.335 21:09:12 
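Every connect_authenticate <digest> <dhgroup> <keyid> round in this section follows the same shape: pin the host app to a single digest/DH-group pair, authorize the host NQN on the subsystem with that slot's keys, attach a controller through the host app, and confirm it came up. Condensed from the trace ($subnqn and $hostnqn stand for the two NQNs spelled out above; the ckey expansion mirrors the ckeys[$3] guard at target/auth.sh@68):

    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]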
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.335 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:25.595 00:12:25.595 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.595 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.595 21:09:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.854 { 00:12:25.854 "cntlid": 3, 00:12:25.854 "qid": 0, 00:12:25.854 "state": "enabled", 00:12:25.854 "thread": "nvmf_tgt_poll_group_000", 00:12:25.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:25.854 "listen_address": { 00:12:25.854 "trtype": "RDMA", 00:12:25.854 "adrfam": "IPv4", 00:12:25.854 "traddr": "192.168.100.8", 00:12:25.854 "trsvcid": "4420" 00:12:25.854 }, 00:12:25.854 "peer_address": { 00:12:25.854 "trtype": "RDMA", 00:12:25.854 "adrfam": "IPv4", 00:12:25.854 "traddr": "192.168.100.8", 00:12:25.854 "trsvcid": "54757" 00:12:25.854 }, 00:12:25.854 "auth": { 00:12:25.854 "state": "completed", 00:12:25.854 "digest": "sha256", 00:12:25.854 "dhgroup": "null" 00:12:25.854 } 00:12:25.854 } 00:12:25.854 ]' 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:25.854 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:26.114 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:26.114 21:09:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:26.114 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:26.114 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:26.114 21:09:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:26.682 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:26.940 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.940 21:09:14 
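The DHHC-1:NN: hash ids in the secrets above line up with the key files created at the start of the test: slot 0 pairs a host key with id 00 (no hash transform) against a sha512 controller key (id 03), slot 1 pairs sha256 (01) with sha384 (02), slot 2 reverses that, and slot 3 carries a sha512 host key with no controller key, exercising one-way authentication. Any secret in the trace can be unpacked offline, assuming the base64(key || little-endian CRC-32) layout noted earlier (a sketch, not an SPDK tool):

    # prints the original key string if the trailing CRC checks out
    python3 -c 'import base64,sys,zlib; raw=base64.b64decode(sys.argv[1].split(":")[2]); k,c=raw[:-4],raw[-4:]; assert zlib.crc32(k).to_bytes(4,"little")==c, "bad CRC"; print(k.decode())' \
        'DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:'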
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.199 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.199 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.200 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.200 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:27.200 00:12:27.200 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.200 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.200 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.459 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.459 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.459 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.460 { 00:12:27.460 "cntlid": 5, 00:12:27.460 "qid": 0, 00:12:27.460 "state": "enabled", 00:12:27.460 "thread": "nvmf_tgt_poll_group_000", 00:12:27.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:27.460 "listen_address": { 00:12:27.460 "trtype": "RDMA", 00:12:27.460 "adrfam": "IPv4", 00:12:27.460 "traddr": "192.168.100.8", 00:12:27.460 "trsvcid": "4420" 00:12:27.460 }, 00:12:27.460 "peer_address": { 00:12:27.460 "trtype": "RDMA", 00:12:27.460 "adrfam": "IPv4", 00:12:27.460 "traddr": "192.168.100.8", 00:12:27.460 "trsvcid": "50017" 00:12:27.460 }, 00:12:27.460 "auth": { 00:12:27.460 "state": "completed", 00:12:27.460 "digest": "sha256", 00:12:27.460 "dhgroup": "null" 00:12:27.460 } 00:12:27.460 } 00:12:27.460 ]' 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.460 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:27.460 21:09:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.719 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.719 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.719 21:09:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.719 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:27.719 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:28.287 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.545 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:28.545 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.545 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.545 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.546 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.546 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.804 21:09:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:28.804 00:12:28.804 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.804 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.804 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.062 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.062 { 00:12:29.062 "cntlid": 7, 00:12:29.062 "qid": 0, 00:12:29.063 "state": "enabled", 00:12:29.063 "thread": "nvmf_tgt_poll_group_000", 00:12:29.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:29.063 "listen_address": { 00:12:29.063 "trtype": "RDMA", 00:12:29.063 "adrfam": "IPv4", 00:12:29.063 "traddr": "192.168.100.8", 00:12:29.063 "trsvcid": "4420" 00:12:29.063 }, 00:12:29.063 "peer_address": { 00:12:29.063 "trtype": "RDMA", 00:12:29.063 "adrfam": "IPv4", 00:12:29.063 "traddr": "192.168.100.8", 00:12:29.063 "trsvcid": "59394" 00:12:29.063 }, 00:12:29.063 "auth": { 00:12:29.063 "state": "completed", 00:12:29.063 "digest": "sha256", 00:12:29.063 "dhgroup": "null" 00:12:29.063 } 00:12:29.063 } 00:12:29.063 ]' 00:12:29.063 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.063 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:29.063 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
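The qpair dumps are what each round actually asserts on (target/auth.sh@73-77): first that the host app's controller is named nvme0, then the negotiated auth block. The same checks, condensed:

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Note the cntlid in successive dumps climbs 1, 3, 5, 7, ...; the even ids are presumably consumed by the kernel-initiator controllers created between dumps.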
00:12:29.063 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:29.063 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.321 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.321 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.321 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.321 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:29.321 21:09:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:29.888 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:30.145 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:30.404 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.404 21:09:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.405 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:30.405 00:12:30.663 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:30.663 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:30.663 21:09:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:30.663 { 00:12:30.663 "cntlid": 9, 00:12:30.663 "qid": 0, 00:12:30.663 "state": "enabled", 00:12:30.663 "thread": "nvmf_tgt_poll_group_000", 00:12:30.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:30.663 "listen_address": { 00:12:30.663 "trtype": "RDMA", 00:12:30.663 "adrfam": "IPv4", 00:12:30.663 "traddr": "192.168.100.8", 00:12:30.663 "trsvcid": "4420" 00:12:30.663 }, 00:12:30.663 "peer_address": { 00:12:30.663 "trtype": "RDMA", 00:12:30.663 "adrfam": "IPv4", 00:12:30.663 "traddr": "192.168.100.8", 00:12:30.663 "trsvcid": "58922" 00:12:30.663 }, 00:12:30.663 "auth": { 00:12:30.663 "state": "completed", 00:12:30.663 "digest": "sha256", 00:12:30.663 "dhgroup": "ffdhe2048" 00:12:30.663 } 00:12:30.663 } 00:12:30.663 ]' 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
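The recurring nvme_connect wrapper (target/auth.sh@36) is the kernel-initiator leg of each round: unlike the host app, nvme connect takes the raw DHHC-1 strings on the command line rather than keyring names. Its shape, with the secrets elided:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
         -q "$hostnqn" --hostid "$hostid" -l 0 \
         --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"

Here -i 1 limits the association to a single I/O queue and -l 0 zeroes ctrl-loss-tmo, so a failed authentication errors out immediately instead of retrying.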
00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:30.663 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:30.922 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:31.489 21:09:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:31.748 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:31.748 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:32.005 21:09:19 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.005 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.006 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:32.263 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:32.264 { 00:12:32.264 "cntlid": 11, 00:12:32.264 "qid": 0, 00:12:32.264 "state": "enabled", 00:12:32.264 "thread": "nvmf_tgt_poll_group_000", 00:12:32.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:32.264 "listen_address": { 00:12:32.264 "trtype": "RDMA", 00:12:32.264 "adrfam": "IPv4", 00:12:32.264 "traddr": "192.168.100.8", 00:12:32.264 "trsvcid": "4420" 00:12:32.264 }, 00:12:32.264 "peer_address": { 00:12:32.264 "trtype": "RDMA", 00:12:32.264 "adrfam": "IPv4", 00:12:32.264 "traddr": 
"192.168.100.8", 00:12:32.264 "trsvcid": "57506" 00:12:32.264 }, 00:12:32.264 "auth": { 00:12:32.264 "state": "completed", 00:12:32.264 "digest": "sha256", 00:12:32.264 "dhgroup": "ffdhe2048" 00:12:32.264 } 00:12:32.264 } 00:12:32.264 ]' 00:12:32.264 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:32.523 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:32.782 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:32.782 21:09:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:33.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:33.348 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.608 21:09:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.868 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.868 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:34.128 { 00:12:34.128 "cntlid": 13, 00:12:34.128 "qid": 0, 00:12:34.128 "state": "enabled", 00:12:34.128 "thread": "nvmf_tgt_poll_group_000", 00:12:34.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:34.128 "listen_address": { 00:12:34.128 
"trtype": "RDMA", 00:12:34.128 "adrfam": "IPv4", 00:12:34.128 "traddr": "192.168.100.8", 00:12:34.128 "trsvcid": "4420" 00:12:34.128 }, 00:12:34.128 "peer_address": { 00:12:34.128 "trtype": "RDMA", 00:12:34.128 "adrfam": "IPv4", 00:12:34.128 "traddr": "192.168.100.8", 00:12:34.128 "trsvcid": "45702" 00:12:34.128 }, 00:12:34.128 "auth": { 00:12:34.128 "state": "completed", 00:12:34.128 "digest": "sha256", 00:12:34.128 "dhgroup": "ffdhe2048" 00:12:34.128 } 00:12:34.128 } 00:12:34.128 ]' 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:34.128 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:34.387 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:34.387 21:09:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:34.955 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.215 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.475 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.475 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.735 { 00:12:35.735 "cntlid": 15, 00:12:35.735 "qid": 0, 00:12:35.735 "state": "enabled", 
00:12:35.735 "thread": "nvmf_tgt_poll_group_000", 00:12:35.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:35.735 "listen_address": { 00:12:35.735 "trtype": "RDMA", 00:12:35.735 "adrfam": "IPv4", 00:12:35.735 "traddr": "192.168.100.8", 00:12:35.735 "trsvcid": "4420" 00:12:35.735 }, 00:12:35.735 "peer_address": { 00:12:35.735 "trtype": "RDMA", 00:12:35.735 "adrfam": "IPv4", 00:12:35.735 "traddr": "192.168.100.8", 00:12:35.735 "trsvcid": "38742" 00:12:35.735 }, 00:12:35.735 "auth": { 00:12:35.735 "state": "completed", 00:12:35.735 "digest": "sha256", 00:12:35.735 "dhgroup": "ffdhe2048" 00:12:35.735 } 00:12:35.735 } 00:12:35.735 ]' 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:35.735 21:09:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.735 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.735 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.735 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.994 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:35.994 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.562 21:09:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.822 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.081 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.081 21:09:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.081 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.081 { 00:12:37.081 "cntlid": 17, 00:12:37.081 "qid": 0, 00:12:37.081 "state": "enabled", 00:12:37.081 "thread": "nvmf_tgt_poll_group_000", 00:12:37.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:37.081 "listen_address": { 00:12:37.081 "trtype": "RDMA", 00:12:37.081 "adrfam": "IPv4", 00:12:37.081 "traddr": "192.168.100.8", 00:12:37.081 "trsvcid": "4420" 00:12:37.081 }, 00:12:37.081 "peer_address": { 00:12:37.081 "trtype": "RDMA", 00:12:37.081 "adrfam": "IPv4", 00:12:37.081 "traddr": "192.168.100.8", 00:12:37.081 "trsvcid": "59849" 00:12:37.081 }, 00:12:37.081 "auth": { 00:12:37.081 "state": "completed", 00:12:37.081 "digest": "sha256", 00:12:37.081 "dhgroup": "ffdhe3072" 00:12:37.081 } 00:12:37.081 } 00:12:37.081 ]' 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.340 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.600 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:37.600 21:09:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
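
Every attach is then verified through the target rather than trusted blindly: nvmf_subsystem_get_qpairs returns the qpair array dumped above, and the three jq probes at target/auth.sh@75-@77 assert that the negotiated digest and DH group are the ones the host was pinned to, and that authentication reached the completed state. A simplified equivalent is below; the script routes these calls through its rpc_cmd helper, and the recurring common/autotest_common.sh@563 xtrace_disable and @591 [[ 0 == 0 ]] entries appear to be that helper muting trace noise and asserting a zero return status.

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
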
common/autotest_common.sh@10 -- # set +x 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.169 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.429 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:38.688 00:12:38.688 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:38.688 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:38.688 21:09:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:38.688 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:38.688 21:09:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:38.688 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.688 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:38.948 { 00:12:38.948 "cntlid": 19, 00:12:38.948 "qid": 0, 00:12:38.948 "state": "enabled", 00:12:38.948 "thread": "nvmf_tgt_poll_group_000", 00:12:38.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:38.948 "listen_address": { 00:12:38.948 "trtype": "RDMA", 00:12:38.948 "adrfam": "IPv4", 00:12:38.948 "traddr": "192.168.100.8", 00:12:38.948 "trsvcid": "4420" 00:12:38.948 }, 00:12:38.948 "peer_address": { 00:12:38.948 "trtype": "RDMA", 00:12:38.948 "adrfam": "IPv4", 00:12:38.948 "traddr": "192.168.100.8", 00:12:38.948 "trsvcid": "56769" 00:12:38.948 }, 00:12:38.948 "auth": { 00:12:38.948 "state": "completed", 00:12:38.948 "digest": "sha256", 00:12:38.948 "dhgroup": "ffdhe3072" 00:12:38.948 } 00:12:38.948 } 00:12:38.948 ]' 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:38.948 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.207 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:39.207 21:09:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:39.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
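
On the host side the same credentials are exercised through an SPDK application rather than the kernel: hostrpc is a thin wrapper around scripts/rpc.py -s /var/tmp/host.sock (its expansion is printed at each target/auth.sh@31 entry), and bdev_connect issues the attach seen repeatedly above. Condensed:

    # hostrpc as the trace shows it expanding (target/auth.sh@31).
    hostrpc() {
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
    # Attach an NVMe-oF controller over RDMA, authenticating with keyring entries.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Confirm the controller actually came up under the expected name (target/auth.sh@73).
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
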
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:39.775 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.034 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.294 00:12:40.294 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.294 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.294 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:40.553 { 00:12:40.553 "cntlid": 21, 00:12:40.553 "qid": 0, 00:12:40.553 "state": "enabled", 00:12:40.553 "thread": "nvmf_tgt_poll_group_000", 00:12:40.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:40.553 "listen_address": { 00:12:40.553 "trtype": "RDMA", 00:12:40.553 "adrfam": "IPv4", 00:12:40.553 "traddr": "192.168.100.8", 00:12:40.553 "trsvcid": "4420" 00:12:40.553 }, 00:12:40.553 "peer_address": { 00:12:40.553 "trtype": "RDMA", 00:12:40.553 "adrfam": "IPv4", 00:12:40.553 "traddr": "192.168.100.8", 00:12:40.553 "trsvcid": "55305" 00:12:40.553 }, 00:12:40.553 "auth": { 00:12:40.553 "state": "completed", 00:12:40.553 "digest": "sha256", 00:12:40.553 "dhgroup": "ffdhe3072" 00:12:40.553 } 00:12:40.553 } 00:12:40.553 ]' 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.553 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.554 21:09:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.813 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:40.813 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.382 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.641 21:09:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:41.898 00:12:41.898 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.898 21:09:29 
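
Each cycle also repeats the handshake with the kernel initiator, where nvme connect takes the raw DHHC-1 secrets instead of keyring names. In the DH-HMAC-CHAP secret representation, the two-digit field after DHHC-1 indicates the hash the secret is sized for (01 = SHA-256, 02 = SHA-384, 03 = SHA-512, with 00 leaving it unspecified) and the string is terminated by a colon. With the literal secrets elided, the nvme_connect helper (target/auth.sh@36) boils down to:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
    # Secrets shortened here; the log above shows the real base64 values.
    nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 -l 0 \
        -q "$HOSTNQN" --hostid 00bafac1-9c9c-e711-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'
    nvme disconnect -n "$SUBNQN"
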
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.898 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.157 { 00:12:42.157 "cntlid": 23, 00:12:42.157 "qid": 0, 00:12:42.157 "state": "enabled", 00:12:42.157 "thread": "nvmf_tgt_poll_group_000", 00:12:42.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:42.157 "listen_address": { 00:12:42.157 "trtype": "RDMA", 00:12:42.157 "adrfam": "IPv4", 00:12:42.157 "traddr": "192.168.100.8", 00:12:42.157 "trsvcid": "4420" 00:12:42.157 }, 00:12:42.157 "peer_address": { 00:12:42.157 "trtype": "RDMA", 00:12:42.157 "adrfam": "IPv4", 00:12:42.157 "traddr": "192.168.100.8", 00:12:42.157 "trsvcid": "34281" 00:12:42.157 }, 00:12:42.157 "auth": { 00:12:42.157 "state": "completed", 00:12:42.157 "digest": "sha256", 00:12:42.157 "dhgroup": "ffdhe3072" 00:12:42.157 } 00:12:42.157 } 00:12:42.157 ]' 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:42.157 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.415 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:42.415 21:09:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:12:42.982 21:09:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:43.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.241 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:43.500 00:12:43.759 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.759 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.759 21:09:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.759 { 00:12:43.759 "cntlid": 25, 00:12:43.759 "qid": 0, 00:12:43.759 "state": "enabled", 00:12:43.759 "thread": "nvmf_tgt_poll_group_000", 00:12:43.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:43.759 "listen_address": { 00:12:43.759 "trtype": "RDMA", 00:12:43.759 "adrfam": "IPv4", 00:12:43.759 "traddr": "192.168.100.8", 00:12:43.759 "trsvcid": "4420" 00:12:43.759 }, 00:12:43.759 "peer_address": { 00:12:43.759 "trtype": "RDMA", 00:12:43.759 "adrfam": "IPv4", 00:12:43.759 "traddr": "192.168.100.8", 00:12:43.759 "trsvcid": "55562" 00:12:43.759 }, 00:12:43.759 "auth": { 00:12:43.759 "state": "completed", 00:12:43.759 "digest": "sha256", 00:12:43.759 "dhgroup": "ffdhe4096" 00:12:43.759 } 00:12:43.759 } 00:12:43.759 ]' 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:43.759 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.017 21:09:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:44.017 21:09:31 
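
Between attempts the state is torn down in three steps so that every digest/dhgroup/key combination negotiates from scratch: the SPDK host drops its controller, the kernel initiator disconnects, and the host entry (with its keys) is removed from the subsystem. Condensed from the trace:

    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
    hostrpc bdev_nvme_detach_controller nvme0                  # drop the SPDK host's controller
    nvme disconnect -n "$SUBNQN"                               # drop the kernel initiator's controller
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"    # deregister the host and its keys
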
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:44.583 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:44.842 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.102 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:45.388 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.388 { 00:12:45.388 "cntlid": 27, 00:12:45.388 "qid": 0, 00:12:45.388 "state": "enabled", 00:12:45.388 "thread": "nvmf_tgt_poll_group_000", 00:12:45.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:45.388 "listen_address": { 00:12:45.388 "trtype": "RDMA", 00:12:45.388 "adrfam": "IPv4", 00:12:45.388 "traddr": "192.168.100.8", 00:12:45.388 "trsvcid": "4420" 00:12:45.388 }, 00:12:45.388 "peer_address": { 00:12:45.388 "trtype": "RDMA", 00:12:45.388 "adrfam": "IPv4", 00:12:45.388 "traddr": "192.168.100.8", 00:12:45.388 "trsvcid": "52910" 00:12:45.388 }, 00:12:45.388 "auth": { 00:12:45.388 "state": "completed", 00:12:45.388 "digest": "sha256", 00:12:45.388 "dhgroup": "ffdhe4096" 00:12:45.388 } 00:12:45.388 } 00:12:45.388 ]' 00:12:45.388 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.712 21:09:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:45.712 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:45.712 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:46.345 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.604 21:09:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.604 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.604 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.604 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.604 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:46.864 00:12:46.864 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:46.864 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:46.864 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.122 { 00:12:47.122 "cntlid": 29, 00:12:47.122 "qid": 0, 00:12:47.122 "state": "enabled", 00:12:47.122 "thread": "nvmf_tgt_poll_group_000", 00:12:47.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:47.122 "listen_address": { 00:12:47.122 "trtype": "RDMA", 00:12:47.122 "adrfam": "IPv4", 00:12:47.122 "traddr": "192.168.100.8", 00:12:47.122 "trsvcid": "4420" 00:12:47.122 }, 00:12:47.122 "peer_address": { 00:12:47.122 "trtype": "RDMA", 00:12:47.122 "adrfam": "IPv4", 00:12:47.122 "traddr": "192.168.100.8", 00:12:47.122 "trsvcid": "38334" 00:12:47.122 }, 00:12:47.122 "auth": { 00:12:47.122 "state": "completed", 00:12:47.122 "digest": "sha256", 00:12:47.122 "dhgroup": "ffdhe4096" 00:12:47.122 } 00:12:47.122 } 00:12:47.122 ]' 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:47.122 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.380 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.381 21:09:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:47.381 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:47.381 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:12:47.381 21:09:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:12:47.948 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:48.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:12:48.207 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:48.467 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:48.467
00:12:48.726 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:48.726 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:48.726 21:09:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:48.726 {
00:12:48.726 "cntlid": 31,
00:12:48.726 "qid": 0,
00:12:48.726 "state": "enabled",
00:12:48.726 "thread": "nvmf_tgt_poll_group_000",
00:12:48.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:12:48.726 "listen_address": {
00:12:48.726 "trtype": "RDMA",
00:12:48.726 "adrfam": "IPv4",
00:12:48.726 "traddr": "192.168.100.8",
00:12:48.726 "trsvcid": "4420"
00:12:48.726 },
00:12:48.726 "peer_address": {
00:12:48.726 "trtype": "RDMA",
00:12:48.726 "adrfam": "IPv4",
00:12:48.726 "traddr": "192.168.100.8",
00:12:48.726 "trsvcid": "50472"
00:12:48.726 },
00:12:48.726 "auth": {
00:12:48.726 "state": "completed",
00:12:48.726 "digest": "sha256",
00:12:48.726 "dhgroup": "ffdhe4096"
00:12:48.726 }
00:12:48.726 }
00:12:48.726 ]'
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:48.726 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:12:48.985 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:12:49.554 21:09:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:49.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:49.813 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:50.072 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:50.332
00:12:50.332 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:50.332 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:50.332 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:50.591 {
00:12:50.591 "cntlid": 33,
00:12:50.591 "qid": 0,
00:12:50.591 "state": "enabled",
00:12:50.591 "thread": "nvmf_tgt_poll_group_000",
00:12:50.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:12:50.591 "listen_address": {
00:12:50.591 "trtype": "RDMA",
00:12:50.591 "adrfam": "IPv4",
00:12:50.591 "traddr": "192.168.100.8",
00:12:50.591 "trsvcid": "4420"
00:12:50.591 },
00:12:50.591 "peer_address": {
00:12:50.591 "trtype": "RDMA",
00:12:50.591 "adrfam": "IPv4",
00:12:50.591 "traddr": "192.168.100.8",
00:12:50.591 "trsvcid": "59275"
00:12:50.591 },
00:12:50.591 "auth": {
00:12:50.591 "state": "completed",
00:12:50.591 "digest": "sha256",
00:12:50.591 "dhgroup": "ffdhe6144"
00:12:50.591 }
00:12:50.591 }
00:12:50.591 ]'
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:50.591 21:09:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:50.850 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=:
00:12:50.850 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=:
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:51.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:51.418 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:51.678 21:09:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:51.936
00:12:51.936 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:51.936 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:51.936 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:52.193 {
00:12:52.193 "cntlid": 35,
00:12:52.193 "qid": 0,
00:12:52.193 "state": "enabled",
00:12:52.193 "thread": "nvmf_tgt_poll_group_000",
00:12:52.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:12:52.193 "listen_address": {
00:12:52.193 "trtype": "RDMA",
00:12:52.193 "adrfam": "IPv4",
00:12:52.193 "traddr": "192.168.100.8",
00:12:52.193 "trsvcid": "4420"
00:12:52.193 },
00:12:52.193 "peer_address": {
00:12:52.193 "trtype": "RDMA",
00:12:52.193 "adrfam": "IPv4",
00:12:52.193 "traddr": "192.168.100.8",
00:12:52.193 "trsvcid": "35405"
00:12:52.193 },
00:12:52.193 "auth": {
00:12:52.193 "state": "completed",
00:12:52.193 "digest": "sha256",
00:12:52.193 "dhgroup": "ffdhe6144"
00:12:52.193 }
00:12:52.193 }
00:12:52.193 ]'
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:52.193 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:52.452 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==:
00:12:52.452 21:09:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==:
00:12:53.018 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:53.277 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:53.277 21:09:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:53.845
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:53.845 {
00:12:53.845 "cntlid": 37,
00:12:53.845 "qid": 0,
00:12:53.845 "state": "enabled",
00:12:53.845 "thread": "nvmf_tgt_poll_group_000",
00:12:53.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:12:53.845 "listen_address": {
00:12:53.845 "trtype": "RDMA",
00:12:53.845 "adrfam": "IPv4",
00:12:53.845 "traddr": "192.168.100.8",
00:12:53.845 "trsvcid": "4420"
00:12:53.845 },
00:12:53.845 "peer_address": {
00:12:53.845 "trtype": "RDMA",
00:12:53.845 "adrfam": "IPv4",
00:12:53.845 "traddr": "192.168.100.8",
00:12:53.845 "trsvcid": "39778"
00:12:53.845 },
00:12:53.845 "auth": {
00:12:53.845 "state": "completed",
00:12:53.845 "digest": "sha256",
00:12:53.845 "dhgroup": "ffdhe6144"
00:12:53.845 }
00:12:53.845 }
00:12:53.845 ]'
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:53.845 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:54.103 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:54.103 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:54.103 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:54.103 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:12:54.103 21:09:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:12:54.671 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:54.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:12:54.930 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.931 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:55.189 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.189 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:55.189 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:55.189 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:55.447
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:55.447 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:55.706 {
00:12:55.706 "cntlid": 39,
00:12:55.706 "qid": 0,
00:12:55.706 "state": "enabled",
00:12:55.706 "thread": "nvmf_tgt_poll_group_000",
00:12:55.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:12:55.706 "listen_address": {
00:12:55.706 "trtype": "RDMA",
00:12:55.706 "adrfam": "IPv4",
00:12:55.706 "traddr": "192.168.100.8",
00:12:55.706 "trsvcid": "4420"
00:12:55.706 },
00:12:55.706 "peer_address": {
00:12:55.706 "trtype": "RDMA",
00:12:55.706 "adrfam": "IPv4",
00:12:55.706 "traddr": "192.168.100.8",
00:12:55.706 "trsvcid": "34920"
00:12:55.706 },
00:12:55.706 "auth": {
00:12:55.706 "state": "completed",
00:12:55.706 "digest": "sha256",
00:12:55.706 "dhgroup": "ffdhe6144"
00:12:55.706 }
00:12:55.706 }
00:12:55.706 ]'
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:55.706 21:09:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:55.706 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:55.706 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:55.706 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:55.964 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:12:55.964 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:56.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:12:56.532 21:09:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:56.790 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:57.357
00:12:57.357 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:57.357 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:57.357 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:57.357 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:57.358 {
00:12:57.358 "cntlid": 41,
00:12:57.358 "qid": 0,
00:12:57.358 "state": "enabled",
00:12:57.358 "thread": "nvmf_tgt_poll_group_000", 00:12:57.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:57.358 "listen_address": { 00:12:57.358 "trtype": "RDMA", 00:12:57.358 "adrfam": "IPv4", 00:12:57.358 "traddr": "192.168.100.8", 00:12:57.358 "trsvcid": "4420" 00:12:57.358 }, 00:12:57.358 "peer_address": { 00:12:57.358 "trtype": "RDMA", 00:12:57.358 "adrfam": "IPv4", 00:12:57.358 "traddr": "192.168.100.8", 00:12:57.358 "trsvcid": "39628" 00:12:57.358 }, 00:12:57.358 "auth": { 00:12:57.358 "state": "completed", 00:12:57.358 "digest": "sha256", 00:12:57.358 "dhgroup": "ffdhe8192" 00:12:57.358 } 00:12:57.358 } 00:12:57.358 ]' 00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:57.358 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.617 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:57.617 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.617 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.617 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.617 21:09:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.617 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:57.617 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:12:58.185 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.443 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:12:58.443 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.444 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.444 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.444 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.444 21:09:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:58.444 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.703 21:09:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.961 00:12:58.962 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.962 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.962 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.220 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:59.220 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.220 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.221 { 00:12:59.221 "cntlid": 43, 00:12:59.221 "qid": 0, 00:12:59.221 "state": "enabled", 00:12:59.221 "thread": "nvmf_tgt_poll_group_000", 00:12:59.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:12:59.221 "listen_address": { 00:12:59.221 "trtype": "RDMA", 00:12:59.221 "adrfam": "IPv4", 00:12:59.221 "traddr": "192.168.100.8", 00:12:59.221 "trsvcid": "4420" 00:12:59.221 }, 00:12:59.221 "peer_address": { 00:12:59.221 "trtype": "RDMA", 00:12:59.221 "adrfam": "IPv4", 00:12:59.221 "traddr": "192.168.100.8", 00:12:59.221 "trsvcid": "58278" 00:12:59.221 }, 00:12:59.221 "auth": { 00:12:59.221 "state": "completed", 00:12:59.221 "digest": "sha256", 00:12:59.221 "dhgroup": "ffdhe8192" 00:12:59.221 } 00:12:59.221 } 00:12:59.221 ]' 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:59.221 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.479 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.480 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.480 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.480 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:12:59.480 21:09:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:00.046 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:00.304 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.562 21:09:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:00.820
00:13:00.820 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:00.820 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:01.078 {
00:13:01.078 "cntlid": 45,
00:13:01.078 "qid": 0,
00:13:01.078 "state": "enabled",
00:13:01.078 "thread": "nvmf_tgt_poll_group_000",
00:13:01.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:01.078 "listen_address": {
00:13:01.078 "trtype": "RDMA",
00:13:01.078 "adrfam": "IPv4",
00:13:01.078 "traddr": "192.168.100.8",
00:13:01.078 "trsvcid": "4420"
00:13:01.078 },
00:13:01.078 "peer_address": {
00:13:01.078 "trtype": "RDMA",
00:13:01.078 "adrfam": "IPv4",
00:13:01.078 "traddr": "192.168.100.8",
00:13:01.078 "trsvcid": "42017"
00:13:01.078 },
00:13:01.078 "auth": {
00:13:01.078 "state": "completed",
00:13:01.078 "digest": "sha256",
00:13:01.078 "dhgroup": "ffdhe8192"
00:13:01.078 }
00:13:01.078 }
00:13:01.078 ]'
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:01.078 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:01.338 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:01.338 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:01.338 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:01.338 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:13:01.338 21:09:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5:
00:13:01.907 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:02.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:02.167 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:02.427 21:09:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:02.687
00:13:02.687 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:02.687 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:02.687 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:02.947 {
00:13:02.947 "cntlid": 47,
00:13:02.947 "qid": 0,
00:13:02.947 "state": "enabled",
00:13:02.947 "thread": "nvmf_tgt_poll_group_000",
00:13:02.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:02.947 "listen_address": {
00:13:02.947 "trtype": "RDMA",
00:13:02.947 "adrfam": "IPv4",
00:13:02.947 "traddr": "192.168.100.8",
00:13:02.947 "trsvcid": "4420"
00:13:02.947 },
00:13:02.947 "peer_address": {
00:13:02.947 "trtype": "RDMA",
00:13:02.947 "adrfam": "IPv4",
00:13:02.947 "traddr": "192.168.100.8",
00:13:02.947 "trsvcid": "54284"
00:13:02.947 },
00:13:02.947 "auth": {
00:13:02.947 "state": "completed",
00:13:02.947 "digest": "sha256",
00:13:02.947 "dhgroup": "ffdhe8192"
00:13:02.947 }
00:13:02.947 }
00:13:02.947 ]'
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:13:02.947 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:03.206 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:03.206 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:03.206 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:03.206 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:13:03.206 21:09:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=:
00:13:03.774 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:04.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:04.034 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:04.293 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:04.293
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:04.551 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:04.551 {
00:13:04.551 "cntlid": 49,
00:13:04.551 "qid": 0,
00:13:04.551 "state": "enabled",
00:13:04.552 "thread": "nvmf_tgt_poll_group_000",
00:13:04.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562",
00:13:04.552 "listen_address": {
00:13:04.552 "trtype": "RDMA",
00:13:04.552 "adrfam": "IPv4",
00:13:04.552 "traddr": "192.168.100.8",
00:13:04.552 "trsvcid": "4420"
00:13:04.552 },
00:13:04.552 "peer_address": {
00:13:04.552 "trtype": "RDMA",
00:13:04.552 "adrfam": "IPv4",
00:13:04.552 "traddr": "192.168.100.8",
00:13:04.552 "trsvcid": "47724"
00:13:04.552 },
00:13:04.552 "auth": {
00:13:04.552 "state": "completed",
00:13:04.552 "digest": "sha384",
00:13:04.552 "dhgroup": "null"
00:13:04.552 }
00:13:04.552 }
00:13:04.552 ]'
00:13:04.552 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:04.552 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:13:04.552 21:09:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=:
00:13:04.810 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:05.746 21:09:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:05.746 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:06.005 00:13:06.005 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.005 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.005 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.264 { 00:13:06.264 "cntlid": 51, 00:13:06.264 "qid": 0, 00:13:06.264 "state": "enabled", 00:13:06.264 "thread": "nvmf_tgt_poll_group_000", 00:13:06.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:06.264 "listen_address": { 00:13:06.264 "trtype": "RDMA", 00:13:06.264 "adrfam": "IPv4", 00:13:06.264 "traddr": "192.168.100.8", 00:13:06.264 "trsvcid": "4420" 00:13:06.264 }, 00:13:06.264 "peer_address": { 00:13:06.264 "trtype": "RDMA", 00:13:06.264 "adrfam": "IPv4", 00:13:06.264 "traddr": "192.168.100.8", 00:13:06.264 "trsvcid": "58915" 00:13:06.264 }, 00:13:06.264 "auth": { 00:13:06.264 "state": "completed", 00:13:06.264 "digest": "sha384", 00:13:06.264 "dhgroup": "null" 00:13:06.264 } 00:13:06.264 } 00:13:06.264 ]' 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:06.264 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.523 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.523 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.523 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.523 21:09:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:06.523 21:09:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:07.091 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.351 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.610 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.610 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.610 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:13:07.610 21:09:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:07.610 00:13:07.610 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.610 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.610 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.871 { 00:13:07.871 "cntlid": 53, 00:13:07.871 "qid": 0, 00:13:07.871 "state": "enabled", 00:13:07.871 "thread": "nvmf_tgt_poll_group_000", 00:13:07.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:07.871 "listen_address": { 00:13:07.871 "trtype": "RDMA", 00:13:07.871 "adrfam": "IPv4", 00:13:07.871 "traddr": "192.168.100.8", 00:13:07.871 "trsvcid": "4420" 00:13:07.871 }, 00:13:07.871 "peer_address": { 00:13:07.871 "trtype": "RDMA", 00:13:07.871 "adrfam": "IPv4", 00:13:07.871 "traddr": "192.168.100.8", 00:13:07.871 "trsvcid": "44799" 00:13:07.871 }, 00:13:07.871 "auth": { 00:13:07.871 "state": "completed", 00:13:07.871 "digest": "sha384", 00:13:07.871 "dhgroup": "null" 00:13:07.871 } 00:13:07.871 } 00:13:07.871 ]' 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:07.871 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.130 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.130 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.130 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.130 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:08.131 21:09:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:08.697 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.955 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.956 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:08.956 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.956 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:09.215 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.215 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.473 { 00:13:09.473 "cntlid": 55, 00:13:09.473 "qid": 0, 00:13:09.473 "state": "enabled", 00:13:09.473 "thread": "nvmf_tgt_poll_group_000", 00:13:09.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:09.473 "listen_address": { 00:13:09.473 "trtype": "RDMA", 00:13:09.473 "adrfam": "IPv4", 00:13:09.473 "traddr": "192.168.100.8", 00:13:09.473 "trsvcid": "4420" 00:13:09.473 }, 00:13:09.473 "peer_address": { 00:13:09.473 "trtype": "RDMA", 00:13:09.473 "adrfam": "IPv4", 00:13:09.473 "traddr": "192.168.100.8", 00:13:09.473 "trsvcid": "34450" 00:13:09.473 }, 00:13:09.473 "auth": { 00:13:09.473 "state": "completed", 00:13:09.473 "digest": "sha384", 00:13:09.473 "dhgroup": "null" 00:13:09.473 } 00:13:09.473 } 00:13:09.473 ]' 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:09.473 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.732 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.732 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.732 21:09:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
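(Editor's note for readers of this log: each connect_authenticate iteration above reduces to the short host/target RPC sequence below. This is a minimal sketch only — it assumes rpc.py is on PATH, the target uses the default RPC socket, and the named keys key0/ckey0 were registered earlier in target/auth.sh; the NQNs, address, port, and host RPC socket are copied verbatim from the log.)

  # target side: permit the host NQN and bind its DH-HMAC-CHAP keys
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # host side: pin the initiator to a single digest/dhgroup combination
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups null
  # host side: attach over RDMA, which triggers the authentication handshake
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # verify the negotiated auth parameters on the target, then tear down
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0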
00:13:09.732 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:09.732 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:10.302 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.562 21:09:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.821 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.821 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.079 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.079 { 00:13:11.079 "cntlid": 57, 00:13:11.079 "qid": 0, 00:13:11.079 "state": "enabled", 00:13:11.079 "thread": "nvmf_tgt_poll_group_000", 00:13:11.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:11.079 "listen_address": { 00:13:11.079 "trtype": "RDMA", 00:13:11.079 "adrfam": "IPv4", 00:13:11.079 "traddr": "192.168.100.8", 00:13:11.079 "trsvcid": "4420" 00:13:11.079 }, 00:13:11.079 "peer_address": { 00:13:11.079 "trtype": "RDMA", 00:13:11.079 "adrfam": "IPv4", 00:13:11.079 "traddr": "192.168.100.8", 00:13:11.079 "trsvcid": "34248" 00:13:11.079 }, 00:13:11.080 "auth": { 00:13:11.080 "state": "completed", 00:13:11.080 "digest": "sha384", 00:13:11.080 "dhgroup": "ffdhe2048" 00:13:11.080 } 00:13:11.080 } 00:13:11.080 ]' 00:13:11.080 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.080 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:11.080 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.338 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:11.338 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.338 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.338 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:13:11.338 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.596 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:11.596 21:09:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.171 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.429 
21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.429 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.688 00:13:12.688 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.688 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.688 21:09:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.688 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.688 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.688 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.688 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.947 { 00:13:12.947 "cntlid": 59, 00:13:12.947 "qid": 0, 00:13:12.947 "state": "enabled", 00:13:12.947 "thread": "nvmf_tgt_poll_group_000", 00:13:12.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:12.947 "listen_address": { 00:13:12.947 "trtype": "RDMA", 00:13:12.947 "adrfam": "IPv4", 00:13:12.947 "traddr": "192.168.100.8", 00:13:12.947 "trsvcid": "4420" 00:13:12.947 }, 00:13:12.947 "peer_address": { 00:13:12.947 "trtype": "RDMA", 00:13:12.947 "adrfam": "IPv4", 00:13:12.947 "traddr": "192.168.100.8", 00:13:12.947 "trsvcid": "40630" 00:13:12.947 }, 00:13:12.947 "auth": { 00:13:12.947 "state": "completed", 00:13:12.947 "digest": "sha384", 00:13:12.947 "dhgroup": "ffdhe2048" 00:13:12.947 } 00:13:12.947 } 00:13:12.947 ]' 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
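(Editor's note: the qpair JSON dumps above are what the script's assertions key on — each entry's auth object reports the negotiated digest, dhgroup, and handshake state. A minimal equivalent check, assuming jq and the same RPC invocation as in the sketch earlier:)

  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # all three fields must match the loop's current digest/dhgroup,
  # and state must be "completed" for a successful handshake
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]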
00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.947 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.205 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:13.205 21:10:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:13.771 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.029 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.287 00:13:14.288 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.288 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.288 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.546 { 00:13:14.546 "cntlid": 61, 00:13:14.546 "qid": 0, 00:13:14.546 "state": "enabled", 00:13:14.546 "thread": "nvmf_tgt_poll_group_000", 00:13:14.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:14.546 "listen_address": { 00:13:14.546 "trtype": "RDMA", 00:13:14.546 "adrfam": "IPv4", 00:13:14.546 "traddr": "192.168.100.8", 00:13:14.546 "trsvcid": "4420" 00:13:14.546 }, 00:13:14.546 "peer_address": { 00:13:14.546 "trtype": "RDMA", 00:13:14.546 "adrfam": "IPv4", 00:13:14.546 "traddr": "192.168.100.8", 00:13:14.546 "trsvcid": "41452" 00:13:14.546 }, 00:13:14.546 "auth": { 00:13:14.546 "state": "completed", 00:13:14.546 "digest": "sha384", 00:13:14.546 "dhgroup": "ffdhe2048" 00:13:14.546 } 00:13:14.546 } 00:13:14.546 ]' 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.546 21:10:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.805 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:14.805 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.370 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:15.371 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.629 21:10:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.629 21:10:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.888 00:13:15.888 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.888 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.888 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.146 { 00:13:16.146 "cntlid": 63, 00:13:16.146 "qid": 0, 00:13:16.146 "state": "enabled", 00:13:16.146 "thread": "nvmf_tgt_poll_group_000", 00:13:16.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:16.146 "listen_address": { 00:13:16.146 "trtype": "RDMA", 00:13:16.146 "adrfam": "IPv4", 00:13:16.146 "traddr": "192.168.100.8", 00:13:16.146 "trsvcid": "4420" 00:13:16.146 }, 00:13:16.146 "peer_address": { 00:13:16.146 "trtype": "RDMA", 00:13:16.146 "adrfam": "IPv4", 00:13:16.146 "traddr": "192.168.100.8", 00:13:16.146 "trsvcid": "49184" 00:13:16.146 }, 00:13:16.146 "auth": { 00:13:16.146 "state": "completed", 00:13:16.146 "digest": "sha384", 00:13:16.146 "dhgroup": "ffdhe2048" 00:13:16.146 } 00:13:16.146 } 00:13:16.146 ]' 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.146 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.405 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:16.405 21:10:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:16.972 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.231 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.491 00:13:17.491 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.491 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.491 21:10:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:17.751 { 00:13:17.751 "cntlid": 65, 00:13:17.751 "qid": 0, 00:13:17.751 "state": "enabled", 00:13:17.751 "thread": "nvmf_tgt_poll_group_000", 00:13:17.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:17.751 "listen_address": { 00:13:17.751 "trtype": "RDMA", 00:13:17.751 "adrfam": "IPv4", 00:13:17.751 "traddr": "192.168.100.8", 00:13:17.751 "trsvcid": "4420" 00:13:17.751 }, 00:13:17.751 "peer_address": { 00:13:17.751 "trtype": "RDMA", 00:13:17.751 "adrfam": "IPv4", 00:13:17.751 "traddr": "192.168.100.8", 00:13:17.751 "trsvcid": "55264" 
00:13:17.751 }, 00:13:17.751 "auth": { 00:13:17.751 "state": "completed", 00:13:17.751 "digest": "sha384", 00:13:17.751 "dhgroup": "ffdhe3072" 00:13:17.751 } 00:13:17.751 } 00:13:17.751 ]' 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:17.751 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.011 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:18.011 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:18.581 21:10:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:18.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha384 ffdhe3072 1 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.841 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:19.101 00:13:19.101 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.101 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.101 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.360 { 00:13:19.360 "cntlid": 67, 00:13:19.360 "qid": 0, 00:13:19.360 "state": "enabled", 00:13:19.360 "thread": "nvmf_tgt_poll_group_000", 00:13:19.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 
00:13:19.360 "listen_address": { 00:13:19.360 "trtype": "RDMA", 00:13:19.360 "adrfam": "IPv4", 00:13:19.360 "traddr": "192.168.100.8", 00:13:19.360 "trsvcid": "4420" 00:13:19.360 }, 00:13:19.360 "peer_address": { 00:13:19.360 "trtype": "RDMA", 00:13:19.360 "adrfam": "IPv4", 00:13:19.360 "traddr": "192.168.100.8", 00:13:19.360 "trsvcid": "37523" 00:13:19.360 }, 00:13:19.360 "auth": { 00:13:19.360 "state": "completed", 00:13:19.360 "digest": "sha384", 00:13:19.360 "dhgroup": "ffdhe3072" 00:13:19.360 } 00:13:19.360 } 00:13:19.360 ]' 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:19.360 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:19.620 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:19.620 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:19.620 21:10:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.620 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:19.620 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:20.187 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:20.447 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.706 21:10:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.706 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:20.966 { 00:13:20.966 "cntlid": 69, 00:13:20.966 "qid": 0, 00:13:20.966 "state": "enabled", 00:13:20.966 "thread": "nvmf_tgt_poll_group_000", 00:13:20.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:20.966 "listen_address": { 00:13:20.966 "trtype": "RDMA", 00:13:20.966 "adrfam": "IPv4", 00:13:20.966 "traddr": "192.168.100.8", 00:13:20.966 "trsvcid": "4420" 00:13:20.966 }, 00:13:20.966 "peer_address": { 00:13:20.966 "trtype": "RDMA", 00:13:20.966 "adrfam": "IPv4", 00:13:20.966 "traddr": "192.168.100.8", 00:13:20.966 "trsvcid": "48119" 00:13:20.966 }, 00:13:20.966 "auth": { 00:13:20.966 "state": "completed", 00:13:20.966 "digest": "sha384", 00:13:20.966 "dhgroup": "ffdhe3072" 00:13:20.966 } 00:13:20.966 } 00:13:20.966 ]' 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:20.966 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:21.227 21:10:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:21.796 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:22.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:22.054 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:22.054 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.054 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.054 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.054 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:22.055 21:10:09 
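That closes the sha384/ffdhe3072 pass for key2, and the trace advances to the next keyid. Compressed, the double loop this stretch of the log is replaying (reconstructed from the target/auth.sh@119-121 lines above, so treat it as a sketch rather than the script verbatim):

    # For every DH group, re-pin the host options and run one
    # connect/verify/teardown pass per configured key.
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done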
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:22.055 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.313 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.314 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.575 00:13:22.575 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.576 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.576 21:10:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
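One detail worth flagging in the pass above: key3 is added with --dhchap-key only, no --dhchap-ctrlr-key. The ckey expansion at target/auth.sh@68 emits the controller-key argument solely when a controller key exists for the keyid, so the key3 rounds exercise unidirectional authentication. Roughly (the script uses the positional $3; keyid/subnqn/hostnqn here are stand-ins for the literals in the trace):

    # With ckeys[3] unset, ckey expands to an empty array and the target is
    # provisioned for one-way (host-only) DH-HMAC-CHAP.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" "${ckey[@]}"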
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.880 { 00:13:22.880 "cntlid": 71, 00:13:22.880 "qid": 0, 00:13:22.880 "state": "enabled", 00:13:22.880 "thread": "nvmf_tgt_poll_group_000", 00:13:22.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:22.880 "listen_address": { 00:13:22.880 "trtype": "RDMA", 00:13:22.880 "adrfam": "IPv4", 00:13:22.880 "traddr": "192.168.100.8", 00:13:22.880 "trsvcid": "4420" 00:13:22.880 }, 00:13:22.880 "peer_address": { 00:13:22.880 "trtype": "RDMA", 00:13:22.880 "adrfam": "IPv4", 00:13:22.880 "traddr": "192.168.100.8", 00:13:22.880 "trsvcid": "57467" 00:13:22.880 }, 00:13:22.880 "auth": { 00:13:22.880 "state": "completed", 00:13:22.880 "digest": "sha384", 00:13:22.880 "dhgroup": "ffdhe3072" 00:13:22.880 } 00:13:22.880 } 00:13:22.880 ]' 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.880 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.159 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:23.159 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:23.739 21:10:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.739 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.996 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.253 00:13:24.253 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.253 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.253 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.511 21:10:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.511 { 00:13:24.511 "cntlid": 73, 00:13:24.511 "qid": 0, 00:13:24.511 "state": "enabled", 00:13:24.511 "thread": "nvmf_tgt_poll_group_000", 00:13:24.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:24.511 "listen_address": { 00:13:24.511 "trtype": "RDMA", 00:13:24.511 "adrfam": "IPv4", 00:13:24.511 "traddr": "192.168.100.8", 00:13:24.511 "trsvcid": "4420" 00:13:24.511 }, 00:13:24.511 "peer_address": { 00:13:24.511 "trtype": "RDMA", 00:13:24.511 "adrfam": "IPv4", 00:13:24.511 "traddr": "192.168.100.8", 00:13:24.511 "trsvcid": "32797" 00:13:24.511 }, 00:13:24.511 "auth": { 00:13:24.511 "state": "completed", 00:13:24.511 "digest": "sha384", 00:13:24.511 "dhgroup": "ffdhe4096" 00:13:24.511 } 00:13:24.511 } 00:13:24.511 ]' 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.511 21:10:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.769 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:24.769 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.335 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.593 21:10:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.851 00:13:25.851 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:25.851 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.851 21:10:13 
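The key0/ffdhe4096 pass finished just above with the standard per-round teardown before the key1 pass opened. Pulled out of the trace, that teardown is three steps, run after the kernel-initiator check (a sketch; NQNs as in the log):

    # Drop the SPDK-initiator controller, drop the kernel-initiator connection,
    # then revoke the host's access on the target.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562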
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.109 { 00:13:26.109 "cntlid": 75, 00:13:26.109 "qid": 0, 00:13:26.109 "state": "enabled", 00:13:26.109 "thread": "nvmf_tgt_poll_group_000", 00:13:26.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:26.109 "listen_address": { 00:13:26.109 "trtype": "RDMA", 00:13:26.109 "adrfam": "IPv4", 00:13:26.109 "traddr": "192.168.100.8", 00:13:26.109 "trsvcid": "4420" 00:13:26.109 }, 00:13:26.109 "peer_address": { 00:13:26.109 "trtype": "RDMA", 00:13:26.109 "adrfam": "IPv4", 00:13:26.109 "traddr": "192.168.100.8", 00:13:26.109 "trsvcid": "36657" 00:13:26.109 }, 00:13:26.109 "auth": { 00:13:26.109 "state": "completed", 00:13:26.109 "digest": "sha384", 00:13:26.109 "dhgroup": "ffdhe4096" 00:13:26.109 } 00:13:26.109 } 00:13:26.109 ]' 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.109 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.368 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:26.368 21:10:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:13:26.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.933 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.934 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:26.934 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:27.191 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.192 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.449 00:13:27.449 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:13:27.449 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.449 21:10:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.707 { 00:13:27.707 "cntlid": 77, 00:13:27.707 "qid": 0, 00:13:27.707 "state": "enabled", 00:13:27.707 "thread": "nvmf_tgt_poll_group_000", 00:13:27.707 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:27.707 "listen_address": { 00:13:27.707 "trtype": "RDMA", 00:13:27.707 "adrfam": "IPv4", 00:13:27.707 "traddr": "192.168.100.8", 00:13:27.707 "trsvcid": "4420" 00:13:27.707 }, 00:13:27.707 "peer_address": { 00:13:27.707 "trtype": "RDMA", 00:13:27.707 "adrfam": "IPv4", 00:13:27.707 "traddr": "192.168.100.8", 00:13:27.707 "trsvcid": "33543" 00:13:27.707 }, 00:13:27.707 "auth": { 00:13:27.707 "state": "completed", 00:13:27.707 "digest": "sha384", 00:13:27.707 "dhgroup": "ffdhe4096" 00:13:27.707 } 00:13:27.707 } 00:13:27.707 ]' 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.707 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.965 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:27.965 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:28.529 21:10:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.787 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.045 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.045 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.045 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.045 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.303 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.303 { 00:13:29.303 "cntlid": 79, 00:13:29.303 "qid": 0, 00:13:29.303 "state": "enabled", 00:13:29.303 "thread": "nvmf_tgt_poll_group_000", 00:13:29.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:29.303 "listen_address": { 00:13:29.303 "trtype": "RDMA", 00:13:29.303 "adrfam": "IPv4", 00:13:29.303 "traddr": "192.168.100.8", 00:13:29.303 "trsvcid": "4420" 00:13:29.303 }, 00:13:29.303 "peer_address": { 00:13:29.303 "trtype": "RDMA", 00:13:29.303 "adrfam": "IPv4", 00:13:29.303 "traddr": "192.168.100.8", 00:13:29.303 "trsvcid": "38398" 00:13:29.303 }, 00:13:29.303 "auth": { 00:13:29.303 "state": "completed", 00:13:29.303 "digest": "sha384", 00:13:29.303 "dhgroup": "ffdhe4096" 00:13:29.303 } 00:13:29.303 } 00:13:29.303 ]' 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:29.303 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.560 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:29.560 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.560 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.560 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.560 21:10:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.818 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:29.818 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:30.384 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.642 21:10:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.642 21:10:17 
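The round opening above is the first at ffdhe6144; bdev_nvme_set_options is what re-pins the host initiator's offered digest and DH-group set before each attach, which is why it reappears at target/auth.sh@121 on every iteration. In isolation:

    # Hedged sketch: offer exactly one digest and one DH group from the host
    # before the next bdev_nvme_attach_controller.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe6144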
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.899 00:13:30.899 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.899 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.899 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.156 { 00:13:31.156 "cntlid": 81, 00:13:31.156 "qid": 0, 00:13:31.156 "state": "enabled", 00:13:31.156 "thread": "nvmf_tgt_poll_group_000", 00:13:31.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:31.156 "listen_address": { 00:13:31.156 "trtype": "RDMA", 00:13:31.156 "adrfam": "IPv4", 00:13:31.156 "traddr": "192.168.100.8", 00:13:31.156 "trsvcid": "4420" 00:13:31.156 }, 00:13:31.156 "peer_address": { 00:13:31.156 "trtype": "RDMA", 00:13:31.156 "adrfam": "IPv4", 00:13:31.156 "traddr": "192.168.100.8", 00:13:31.156 "trsvcid": "38499" 00:13:31.156 }, 00:13:31.156 "auth": { 00:13:31.156 "state": "completed", 00:13:31.156 "digest": "sha384", 00:13:31.156 "dhgroup": "ffdhe6144" 00:13:31.156 } 00:13:31.156 } 00:13:31.156 ]' 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.156 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.413 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:31.414 21:10:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:31.978 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.236 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:32.236 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.236 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.236 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.236 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.237 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.802 00:13:32.802 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.802 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.802 21:10:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.802 { 00:13:32.802 "cntlid": 83, 00:13:32.802 "qid": 0, 00:13:32.802 "state": "enabled", 00:13:32.802 "thread": "nvmf_tgt_poll_group_000", 00:13:32.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:32.802 "listen_address": { 00:13:32.802 "trtype": "RDMA", 00:13:32.802 "adrfam": "IPv4", 00:13:32.802 "traddr": "192.168.100.8", 00:13:32.802 "trsvcid": "4420" 00:13:32.802 }, 00:13:32.802 "peer_address": { 00:13:32.802 "trtype": "RDMA", 00:13:32.802 "adrfam": "IPv4", 00:13:32.802 "traddr": "192.168.100.8", 00:13:32.802 "trsvcid": "54406" 00:13:32.802 }, 00:13:32.802 "auth": { 00:13:32.802 "state": "completed", 00:13:32.802 "digest": "sha384", 00:13:32.802 "dhgroup": "ffdhe6144" 00:13:32.802 } 00:13:32.802 } 00:13:32.802 ]' 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:32.802 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.060 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.060 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:13:33.060 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.060 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:33.060 21:10:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:33.624 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.881 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:33.881 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.881 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.882 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.882 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.882 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:33.882 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.139 21:10:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.139 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.397 00:13:34.397 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.397 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.397 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.655 { 00:13:34.655 "cntlid": 85, 00:13:34.655 "qid": 0, 00:13:34.655 "state": "enabled", 00:13:34.655 "thread": "nvmf_tgt_poll_group_000", 00:13:34.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:34.655 "listen_address": { 00:13:34.655 "trtype": "RDMA", 00:13:34.655 "adrfam": "IPv4", 00:13:34.655 "traddr": "192.168.100.8", 00:13:34.655 "trsvcid": "4420" 00:13:34.655 }, 00:13:34.655 "peer_address": { 00:13:34.655 "trtype": "RDMA", 00:13:34.655 "adrfam": "IPv4", 00:13:34.655 "traddr": "192.168.100.8", 00:13:34.655 "trsvcid": "42274" 00:13:34.655 }, 00:13:34.655 "auth": { 00:13:34.655 "state": "completed", 00:13:34.655 "digest": "sha384", 00:13:34.655 "dhgroup": "ffdhe6144" 00:13:34.655 } 00:13:34.655 } 00:13:34.655 ]' 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.655 
21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.655 21:10:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.913 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:34.913 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:35.478 21:10:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:35.736 21:10:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.736 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.994 00:13:35.994 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.994 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.994 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.252 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.252 { 00:13:36.252 "cntlid": 87, 00:13:36.252 "qid": 0, 00:13:36.252 "state": "enabled", 00:13:36.252 "thread": "nvmf_tgt_poll_group_000", 00:13:36.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:36.252 "listen_address": { 00:13:36.252 "trtype": "RDMA", 00:13:36.252 "adrfam": "IPv4", 00:13:36.252 "traddr": "192.168.100.8", 00:13:36.252 "trsvcid": "4420" 00:13:36.252 }, 00:13:36.252 "peer_address": { 00:13:36.253 "trtype": "RDMA", 00:13:36.253 "adrfam": "IPv4", 00:13:36.253 "traddr": "192.168.100.8", 00:13:36.253 "trsvcid": "55144" 00:13:36.253 }, 00:13:36.253 "auth": { 00:13:36.253 "state": "completed", 00:13:36.253 "digest": "sha384", 00:13:36.253 "dhgroup": "ffdhe6144" 00:13:36.253 } 00:13:36.253 } 00:13:36.253 ]' 00:13:36.253 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.253 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:36.253 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.253 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:36.253 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.511 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.511 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.511 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.511 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:36.511 21:10:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:37.077 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:37.335 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.336 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.336 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.336 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.593 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.593 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.594 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.594 21:10:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:37.852 00:13:37.852 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.852 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.852 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.109 { 00:13:38.109 "cntlid": 89, 00:13:38.109 "qid": 0, 00:13:38.109 "state": "enabled", 00:13:38.109 "thread": "nvmf_tgt_poll_group_000", 00:13:38.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:38.109 "listen_address": { 00:13:38.109 "trtype": "RDMA", 00:13:38.109 "adrfam": "IPv4", 00:13:38.109 "traddr": "192.168.100.8", 00:13:38.109 "trsvcid": "4420" 00:13:38.109 }, 00:13:38.109 "peer_address": { 00:13:38.109 "trtype": "RDMA", 00:13:38.109 "adrfam": "IPv4", 00:13:38.109 "traddr": "192.168.100.8", 00:13:38.109 "trsvcid": "55403" 00:13:38.109 }, 00:13:38.109 "auth": { 00:13:38.109 "state": "completed", 00:13:38.109 "digest": "sha384", 00:13:38.109 "dhgroup": "ffdhe8192" 00:13:38.109 } 00:13:38.109 } 00:13:38.109 ]' 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.109 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:38.110 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.110 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:38.110 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.367 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.367 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.367 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.367 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:38.367 21:10:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:38.932 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.189 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.447 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.447 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.447 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.447 21:10:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:39.705 00:13:39.705 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.705 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.705 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.962 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.962 { 00:13:39.962 "cntlid": 91, 00:13:39.962 "qid": 0, 00:13:39.962 "state": "enabled", 00:13:39.962 "thread": "nvmf_tgt_poll_group_000", 00:13:39.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:39.962 "listen_address": { 00:13:39.962 "trtype": "RDMA", 00:13:39.963 "adrfam": "IPv4", 00:13:39.963 "traddr": "192.168.100.8", 00:13:39.963 "trsvcid": "4420" 00:13:39.963 }, 00:13:39.963 "peer_address": { 00:13:39.963 "trtype": "RDMA", 00:13:39.963 "adrfam": "IPv4", 00:13:39.963 "traddr": "192.168.100.8", 00:13:39.963 "trsvcid": "56608" 00:13:39.963 }, 00:13:39.963 "auth": { 
00:13:39.963 "state": "completed", 00:13:39.963 "digest": "sha384", 00:13:39.963 "dhgroup": "ffdhe8192" 00:13:39.963 } 00:13:39.963 } 00:13:39.963 ]' 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.963 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.220 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:40.220 21:10:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:40.786 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:41.043 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.044 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:41.610 00:13:41.610 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.610 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.610 21:10:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.867 { 00:13:41.867 "cntlid": 93, 00:13:41.867 "qid": 0, 00:13:41.867 "state": "enabled", 00:13:41.867 "thread": "nvmf_tgt_poll_group_000", 00:13:41.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:41.867 "listen_address": { 00:13:41.867 "trtype": "RDMA", 00:13:41.867 "adrfam": "IPv4", 00:13:41.867 "traddr": "192.168.100.8", 
00:13:41.867 "trsvcid": "4420" 00:13:41.867 }, 00:13:41.867 "peer_address": { 00:13:41.867 "trtype": "RDMA", 00:13:41.867 "adrfam": "IPv4", 00:13:41.867 "traddr": "192.168.100.8", 00:13:41.867 "trsvcid": "47487" 00:13:41.867 }, 00:13:41.867 "auth": { 00:13:41.867 "state": "completed", 00:13:41.867 "digest": "sha384", 00:13:41.867 "dhgroup": "ffdhe8192" 00:13:41.867 } 00:13:41.867 } 00:13:41.867 ]' 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.867 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.868 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.124 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:42.124 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:42.691 21:10:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:13:42.691 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.949 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:43.515 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.515 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.772 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.772 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.772 { 00:13:43.772 "cntlid": 95, 00:13:43.772 "qid": 0, 00:13:43.772 "state": "enabled", 00:13:43.772 "thread": "nvmf_tgt_poll_group_000", 00:13:43.772 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:43.772 "listen_address": { 00:13:43.772 "trtype": "RDMA", 00:13:43.772 "adrfam": "IPv4", 00:13:43.772 "traddr": "192.168.100.8", 00:13:43.772 "trsvcid": "4420" 00:13:43.772 }, 00:13:43.772 "peer_address": { 00:13:43.772 "trtype": "RDMA", 00:13:43.772 "adrfam": "IPv4", 00:13:43.772 "traddr": "192.168.100.8", 00:13:43.772 "trsvcid": "41684" 00:13:43.772 }, 00:13:43.772 "auth": { 00:13:43.772 "state": "completed", 00:13:43.772 "digest": "sha384", 00:13:43.772 "dhgroup": "ffdhe8192" 00:13:43.772 } 00:13:43.772 } 00:13:43.772 ]' 00:13:43.772 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.772 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:43.772 21:10:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.772 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:43.772 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.772 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.772 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.772 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.029 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:44.029 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:44.595 21:10:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.853 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.854 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.854 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.854 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:45.111 00:13:45.111 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.111 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.111 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.369 21:10:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.369 { 00:13:45.369 "cntlid": 97, 00:13:45.369 "qid": 0, 00:13:45.369 "state": "enabled", 00:13:45.369 "thread": "nvmf_tgt_poll_group_000", 00:13:45.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:45.369 "listen_address": { 00:13:45.369 "trtype": "RDMA", 00:13:45.369 "adrfam": "IPv4", 00:13:45.369 "traddr": "192.168.100.8", 00:13:45.369 "trsvcid": "4420" 00:13:45.369 }, 00:13:45.369 "peer_address": { 00:13:45.369 "trtype": "RDMA", 00:13:45.369 "adrfam": "IPv4", 00:13:45.369 "traddr": "192.168.100.8", 00:13:45.369 "trsvcid": "39882" 00:13:45.369 }, 00:13:45.369 "auth": { 00:13:45.369 "state": "completed", 00:13:45.369 "digest": "sha512", 00:13:45.369 "dhgroup": "null" 00:13:45.369 } 00:13:45.369 } 00:13:45.369 ]' 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.369 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.627 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:45.627 21:10:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
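(The trace above is one complete connect_authenticate pass for sha512 / dhgroup null / key0. Condensed to its commands, each pass amounts to the sketch below. This is a reconstruction from the trace, not part of auth.sh verbatim: key0/ckey0 are keyring names registered earlier in the script (not shown in this excerpt), the target RPC uses the default socket, and the host app listens on /var/tmp/host.sock as seen throughout this log.)

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

# 1) Pin the host-side initiator to one digest/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups null

# 2) Allow the host NQN on the target subsystem, bound to a DH-HMAC-CHAP
#    key pair (keyring entries created earlier in auth.sh).
$rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3) Attach from the SPDK host and confirm the qpair authenticated.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed

# 4) Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0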
00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:46.192 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.450 21:10:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.709 00:13:46.709 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.709 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.709 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.967 { 00:13:46.967 "cntlid": 99, 00:13:46.967 "qid": 0, 00:13:46.967 "state": "enabled", 00:13:46.967 "thread": "nvmf_tgt_poll_group_000", 00:13:46.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:46.967 "listen_address": { 00:13:46.967 "trtype": "RDMA", 00:13:46.967 "adrfam": "IPv4", 00:13:46.967 "traddr": "192.168.100.8", 00:13:46.967 "trsvcid": "4420" 00:13:46.967 }, 00:13:46.967 "peer_address": { 00:13:46.967 "trtype": "RDMA", 00:13:46.967 "adrfam": "IPv4", 00:13:46.967 "traddr": "192.168.100.8", 00:13:46.967 "trsvcid": "45364" 00:13:46.967 }, 00:13:46.967 "auth": { 00:13:46.967 "state": "completed", 00:13:46.967 "digest": "sha512", 00:13:46.967 "dhgroup": "null" 00:13:46.967 } 00:13:46.967 } 00:13:46.967 ]' 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.967 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.225 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:47.225 21:10:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:47.791 
21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:47.791 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.049 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:48.307 00:13:48.307 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.307 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.307 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.564 
21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.564 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.564 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.564 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.564 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.564 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.564 { 00:13:48.564 "cntlid": 101, 00:13:48.565 "qid": 0, 00:13:48.565 "state": "enabled", 00:13:48.565 "thread": "nvmf_tgt_poll_group_000", 00:13:48.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:48.565 "listen_address": { 00:13:48.565 "trtype": "RDMA", 00:13:48.565 "adrfam": "IPv4", 00:13:48.565 "traddr": "192.168.100.8", 00:13:48.565 "trsvcid": "4420" 00:13:48.565 }, 00:13:48.565 "peer_address": { 00:13:48.565 "trtype": "RDMA", 00:13:48.565 "adrfam": "IPv4", 00:13:48.565 "traddr": "192.168.100.8", 00:13:48.565 "trsvcid": "54819" 00:13:48.565 }, 00:13:48.565 "auth": { 00:13:48.565 "state": "completed", 00:13:48.565 "digest": "sha512", 00:13:48.565 "dhgroup": "null" 00:13:48.565 } 00:13:48.565 } 00:13:48.565 ]' 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.565 21:10:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.822 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:48.822 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:49.387 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.645 21:10:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:49.645 21:10:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.645 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.903 00:13:49.903 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.903 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.903 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.160 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.160 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.160 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.160 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.161 { 00:13:50.161 "cntlid": 103, 00:13:50.161 "qid": 0, 00:13:50.161 "state": "enabled", 00:13:50.161 "thread": "nvmf_tgt_poll_group_000", 00:13:50.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:50.161 "listen_address": { 00:13:50.161 "trtype": "RDMA", 00:13:50.161 "adrfam": "IPv4", 00:13:50.161 "traddr": "192.168.100.8", 00:13:50.161 "trsvcid": "4420" 00:13:50.161 }, 00:13:50.161 "peer_address": { 00:13:50.161 "trtype": "RDMA", 00:13:50.161 "adrfam": "IPv4", 00:13:50.161 "traddr": "192.168.100.8", 00:13:50.161 "trsvcid": "45761" 00:13:50.161 }, 00:13:50.161 "auth": { 00:13:50.161 "state": "completed", 00:13:50.161 "digest": "sha512", 00:13:50.161 "dhgroup": "null" 00:13:50.161 } 00:13:50.161 } 00:13:50.161 ]' 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.161 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.418 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:50.418 21:10:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:50.983 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.240 21:10:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.240 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.497 00:13:51.497 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:13:51.497 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.497 21:10:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.755 { 00:13:51.755 "cntlid": 105, 00:13:51.755 "qid": 0, 00:13:51.755 "state": "enabled", 00:13:51.755 "thread": "nvmf_tgt_poll_group_000", 00:13:51.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:51.755 "listen_address": { 00:13:51.755 "trtype": "RDMA", 00:13:51.755 "adrfam": "IPv4", 00:13:51.755 "traddr": "192.168.100.8", 00:13:51.755 "trsvcid": "4420" 00:13:51.755 }, 00:13:51.755 "peer_address": { 00:13:51.755 "trtype": "RDMA", 00:13:51.755 "adrfam": "IPv4", 00:13:51.755 "traddr": "192.168.100.8", 00:13:51.755 "trsvcid": "59053" 00:13:51.755 }, 00:13:51.755 "auth": { 00:13:51.755 "state": "completed", 00:13:51.755 "digest": "sha512", 00:13:51.755 "dhgroup": "ffdhe2048" 00:13:51.755 } 00:13:51.755 } 00:13:51.755 ]' 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.755 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.011 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:52.011 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 
--dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:52.576 21:10:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.833 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.834 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.834 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.834 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.834 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:52.834 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.091 00:13:53.091 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.091 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.091 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.349 { 00:13:53.349 "cntlid": 107, 00:13:53.349 "qid": 0, 00:13:53.349 "state": "enabled", 00:13:53.349 "thread": "nvmf_tgt_poll_group_000", 00:13:53.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:53.349 "listen_address": { 00:13:53.349 "trtype": "RDMA", 00:13:53.349 "adrfam": "IPv4", 00:13:53.349 "traddr": "192.168.100.8", 00:13:53.349 "trsvcid": "4420" 00:13:53.349 }, 00:13:53.349 "peer_address": { 00:13:53.349 "trtype": "RDMA", 00:13:53.349 "adrfam": "IPv4", 00:13:53.349 "traddr": "192.168.100.8", 00:13:53.349 "trsvcid": "49633" 00:13:53.349 }, 00:13:53.349 "auth": { 00:13:53.349 "state": "completed", 00:13:53.349 "digest": "sha512", 00:13:53.349 "dhgroup": "ffdhe2048" 00:13:53.349 } 00:13:53.349 } 00:13:53.349 ]' 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:53.349 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.606 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.607 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.607 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.607 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 
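(As the nvme_connect/nvme disconnect pairs in this trace show, after the SPDK-initiator check each pass repeats the handshake with the kernel initiator via nvme-cli, then removes the host entry. A condensed sketch follows, reusing the variables from the sketch above; the DHHC-1 strings are the literal secrets visible in the trace, elided here as placeholders rather than repeated.)

# Kernel-initiator cross-check with the same DH-HMAC-CHAP secrets.
nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 \
    -q "$HOSTNQN" --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 \
    --dhchap-secret 'DHHC-1:01:<host secret from the trace>' \
    --dhchap-ctrl-secret 'DHHC-1:02:<controller secret from the trace>'
nvme disconnect -n "$SUBNQN"

# Drop the host entry before the next iteration reconfigures it.
$rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"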
00:13:53.607 21:10:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:13:54.172 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.429 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:54.429 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.429 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.429 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.430 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.430 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:54.430 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.688 21:10:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:54.946 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.946 { 00:13:54.946 "cntlid": 109, 00:13:54.946 "qid": 0, 00:13:54.946 "state": "enabled", 00:13:54.946 "thread": "nvmf_tgt_poll_group_000", 00:13:54.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:54.946 "listen_address": { 00:13:54.946 "trtype": "RDMA", 00:13:54.946 "adrfam": "IPv4", 00:13:54.946 "traddr": "192.168.100.8", 00:13:54.946 "trsvcid": "4420" 00:13:54.946 }, 00:13:54.946 "peer_address": { 00:13:54.946 "trtype": "RDMA", 00:13:54.946 "adrfam": "IPv4", 00:13:54.946 "traddr": "192.168.100.8", 00:13:54.946 "trsvcid": "47222" 00:13:54.946 }, 00:13:54.946 "auth": { 00:13:54.946 "state": "completed", 00:13:54.946 "digest": "sha512", 00:13:54.946 "dhgroup": "ffdhe2048" 00:13:54.946 } 00:13:54.946 } 00:13:54.946 ]' 00:13:54.946 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.204 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.462 21:10:42 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:55.462 21:10:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:56.027 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.285 21:10:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.285 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.542 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.542 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.800 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.800 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.800 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.800 { 00:13:56.800 "cntlid": 111, 00:13:56.800 "qid": 0, 00:13:56.800 "state": "enabled", 00:13:56.800 "thread": "nvmf_tgt_poll_group_000", 00:13:56.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:56.800 "listen_address": { 00:13:56.800 "trtype": "RDMA", 00:13:56.800 "adrfam": "IPv4", 00:13:56.800 "traddr": "192.168.100.8", 00:13:56.800 "trsvcid": "4420" 00:13:56.800 }, 00:13:56.800 "peer_address": { 00:13:56.800 "trtype": "RDMA", 00:13:56.800 "adrfam": "IPv4", 00:13:56.800 "traddr": "192.168.100.8", 00:13:56.800 "trsvcid": "36887" 00:13:56.800 }, 00:13:56.800 "auth": { 00:13:56.800 "state": "completed", 00:13:56.800 "digest": "sha512", 00:13:56.800 "dhgroup": "ffdhe2048" 00:13:56.800 } 00:13:56.800 } 00:13:56.800 ]' 00:13:56.800 21:10:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.800 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.057 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:57.057 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:57.622 21:10:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.879 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.136 00:13:58.136 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.136 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.137 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.394 { 00:13:58.394 "cntlid": 113, 00:13:58.394 "qid": 0, 00:13:58.394 "state": "enabled", 00:13:58.394 "thread": "nvmf_tgt_poll_group_000", 00:13:58.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:58.394 "listen_address": { 00:13:58.394 "trtype": "RDMA", 00:13:58.394 "adrfam": "IPv4", 00:13:58.394 "traddr": "192.168.100.8", 00:13:58.394 "trsvcid": "4420" 00:13:58.394 }, 00:13:58.394 "peer_address": { 00:13:58.394 "trtype": "RDMA", 00:13:58.394 "adrfam": "IPv4", 00:13:58.394 "traddr": "192.168.100.8", 00:13:58.394 "trsvcid": "58449" 00:13:58.394 }, 00:13:58.394 "auth": { 00:13:58.394 "state": "completed", 00:13:58.394 "digest": "sha512", 00:13:58.394 "dhgroup": "ffdhe3072" 00:13:58.394 } 00:13:58.394 } 00:13:58.394 ]' 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.394 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.652 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:58.652 21:10:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:13:59.217 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.217 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:13:59.217 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.217 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.218 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.218 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.218 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:59.218 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.475 21:10:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.732 00:13:59.732 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.732 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.732 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:59.989 { 00:13:59.989 "cntlid": 115, 00:13:59.989 "qid": 0, 00:13:59.989 "state": "enabled", 00:13:59.989 "thread": "nvmf_tgt_poll_group_000", 00:13:59.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:13:59.989 "listen_address": { 00:13:59.989 "trtype": "RDMA", 00:13:59.989 "adrfam": "IPv4", 00:13:59.989 "traddr": "192.168.100.8", 00:13:59.989 "trsvcid": "4420" 00:13:59.989 }, 00:13:59.989 "peer_address": { 00:13:59.989 "trtype": "RDMA", 00:13:59.989 "adrfam": "IPv4", 00:13:59.989 "traddr": "192.168.100.8", 00:13:59.989 "trsvcid": "43737" 00:13:59.989 }, 00:13:59.989 "auth": { 00:13:59.989 "state": "completed", 00:13:59.989 "digest": "sha512", 00:13:59.989 "dhgroup": "ffdhe3072" 00:13:59.989 } 00:13:59.989 } 00:13:59.989 ]' 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.989 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.247 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:00.247 21:10:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:00.844 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.844 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.117 
21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.117 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.388 00:14:01.388 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.388 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.388 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.646 { 00:14:01.646 "cntlid": 117, 00:14:01.646 "qid": 0, 00:14:01.646 "state": "enabled", 00:14:01.646 "thread": "nvmf_tgt_poll_group_000", 00:14:01.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:01.646 "listen_address": { 00:14:01.646 "trtype": "RDMA", 00:14:01.646 "adrfam": "IPv4", 00:14:01.646 "traddr": "192.168.100.8", 00:14:01.646 "trsvcid": "4420" 00:14:01.646 }, 00:14:01.646 "peer_address": { 00:14:01.646 "trtype": "RDMA", 00:14:01.646 "adrfam": "IPv4", 00:14:01.646 "traddr": "192.168.100.8", 00:14:01.646 "trsvcid": "55410" 00:14:01.646 }, 00:14:01.646 "auth": { 00:14:01.646 "state": "completed", 00:14:01.646 "digest": "sha512", 00:14:01.646 "dhgroup": "ffdhe3072" 00:14:01.646 } 00:14:01.646 } 00:14:01.646 ]' 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:01.646 21:10:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.646 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.646 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.646 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.903 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:01.903 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:02.467 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:02.725 21:10:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.725 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:02.982 00:14:02.983 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.983 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.983 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.241 { 00:14:03.241 "cntlid": 119, 00:14:03.241 "qid": 0, 00:14:03.241 "state": "enabled", 00:14:03.241 "thread": "nvmf_tgt_poll_group_000", 00:14:03.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:03.241 "listen_address": { 00:14:03.241 "trtype": "RDMA", 00:14:03.241 "adrfam": "IPv4", 00:14:03.241 "traddr": "192.168.100.8", 00:14:03.241 "trsvcid": "4420" 00:14:03.241 }, 00:14:03.241 "peer_address": { 00:14:03.241 "trtype": "RDMA", 00:14:03.241 "adrfam": "IPv4", 00:14:03.241 "traddr": "192.168.100.8", 00:14:03.241 "trsvcid": "59134" 00:14:03.241 }, 00:14:03.241 "auth": { 00:14:03.241 "state": "completed", 00:14:03.241 "digest": "sha512", 00:14:03.241 "dhgroup": "ffdhe3072" 
00:14:03.241 } 00:14:03.241 } 00:14:03.241 ]' 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.241 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.499 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:03.499 21:10:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:04.065 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:04.322 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.580 21:10:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:04.838 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.838 { 00:14:04.838 "cntlid": 121, 00:14:04.838 "qid": 0, 00:14:04.838 "state": "enabled", 00:14:04.838 "thread": "nvmf_tgt_poll_group_000", 00:14:04.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:04.838 "listen_address": { 00:14:04.838 "trtype": "RDMA", 00:14:04.838 "adrfam": "IPv4", 00:14:04.838 "traddr": "192.168.100.8", 00:14:04.838 "trsvcid": "4420" 00:14:04.838 }, 00:14:04.838 "peer_address": { 00:14:04.838 "trtype": "RDMA", 
00:14:04.838 "adrfam": "IPv4", 00:14:04.838 "traddr": "192.168.100.8", 00:14:04.838 "trsvcid": "56807" 00:14:04.838 }, 00:14:04.838 "auth": { 00:14:04.838 "state": "completed", 00:14:04.838 "digest": "sha512", 00:14:04.838 "dhgroup": "ffdhe4096" 00:14:04.838 } 00:14:04.838 } 00:14:04.838 ]' 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:04.838 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:05.096 21:10:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:06.032 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.033 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:06.291 00:14:06.291 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.291 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.291 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.549 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.549 { 00:14:06.549 "cntlid": 123, 00:14:06.549 "qid": 0, 00:14:06.549 "state": "enabled", 00:14:06.549 "thread": "nvmf_tgt_poll_group_000", 
00:14:06.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:06.549 "listen_address": { 00:14:06.549 "trtype": "RDMA", 00:14:06.549 "adrfam": "IPv4", 00:14:06.549 "traddr": "192.168.100.8", 00:14:06.550 "trsvcid": "4420" 00:14:06.550 }, 00:14:06.550 "peer_address": { 00:14:06.550 "trtype": "RDMA", 00:14:06.550 "adrfam": "IPv4", 00:14:06.550 "traddr": "192.168.100.8", 00:14:06.550 "trsvcid": "58276" 00:14:06.550 }, 00:14:06.550 "auth": { 00:14:06.550 "state": "completed", 00:14:06.550 "digest": "sha512", 00:14:06.550 "dhgroup": "ffdhe4096" 00:14:06.550 } 00:14:06.550 } 00:14:06.550 ]' 00:14:06.550 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.550 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:06.550 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.550 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:06.550 21:10:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.811 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.811 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.811 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.811 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:06.811 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:07.378 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:14:07.638 21:10:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:07.897 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:08.156 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:08.156 { 00:14:08.156 "cntlid": 125, 00:14:08.156 "qid": 0, 00:14:08.156 "state": "enabled", 00:14:08.156 "thread": "nvmf_tgt_poll_group_000", 00:14:08.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:08.156 "listen_address": { 00:14:08.156 "trtype": "RDMA", 00:14:08.156 "adrfam": "IPv4", 00:14:08.156 "traddr": "192.168.100.8", 00:14:08.156 "trsvcid": "4420" 00:14:08.156 }, 00:14:08.156 "peer_address": { 00:14:08.156 "trtype": "RDMA", 00:14:08.156 "adrfam": "IPv4", 00:14:08.156 "traddr": "192.168.100.8", 00:14:08.156 "trsvcid": "52208" 00:14:08.156 }, 00:14:08.156 "auth": { 00:14:08.156 "state": "completed", 00:14:08.156 "digest": "sha512", 00:14:08.156 "dhgroup": "ffdhe4096" 00:14:08.156 } 00:14:08.156 } 00:14:08.156 ]' 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:08.156 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.414 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:08.414 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.414 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.414 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.414 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.673 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:08.674 21:10:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.241 21:10:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:09.241 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.500 21:10:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:09.759 00:14:09.759 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.759 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.759 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.018 21:10:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.018 { 00:14:10.018 "cntlid": 127, 00:14:10.018 "qid": 0, 00:14:10.018 "state": "enabled", 00:14:10.018 "thread": "nvmf_tgt_poll_group_000", 00:14:10.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:10.018 "listen_address": { 00:14:10.018 "trtype": "RDMA", 00:14:10.018 "adrfam": "IPv4", 00:14:10.018 "traddr": "192.168.100.8", 00:14:10.018 "trsvcid": "4420" 00:14:10.018 }, 00:14:10.018 "peer_address": { 00:14:10.018 "trtype": "RDMA", 00:14:10.018 "adrfam": "IPv4", 00:14:10.018 "traddr": "192.168.100.8", 00:14:10.018 "trsvcid": "35754" 00:14:10.018 }, 00:14:10.018 "auth": { 00:14:10.018 "state": "completed", 00:14:10.018 "digest": "sha512", 00:14:10.018 "dhgroup": "ffdhe4096" 00:14:10.018 } 00:14:10.018 } 00:14:10.018 ]' 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.018 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.277 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:10.277 21:10:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:10.845 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.104 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.363 00:14:11.363 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.363 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.363 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.622 21:10:58 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.622 { 00:14:11.622 "cntlid": 129, 00:14:11.622 "qid": 0, 00:14:11.622 "state": "enabled", 00:14:11.622 "thread": "nvmf_tgt_poll_group_000", 00:14:11.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:11.622 "listen_address": { 00:14:11.622 "trtype": "RDMA", 00:14:11.622 "adrfam": "IPv4", 00:14:11.622 "traddr": "192.168.100.8", 00:14:11.622 "trsvcid": "4420" 00:14:11.622 }, 00:14:11.622 "peer_address": { 00:14:11.622 "trtype": "RDMA", 00:14:11.622 "adrfam": "IPv4", 00:14:11.622 "traddr": "192.168.100.8", 00:14:11.622 "trsvcid": "59168" 00:14:11.622 }, 00:14:11.622 "auth": { 00:14:11.622 "state": "completed", 00:14:11.622 "digest": "sha512", 00:14:11.622 "dhgroup": "ffdhe6144" 00:14:11.622 } 00:14:11.622 } 00:14:11.622 ]' 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:11.622 21:10:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.622 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:11.622 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.881 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.881 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.881 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.881 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:11.881 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:12.448 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.706 21:10:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:12.706 21:10:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:12.965 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:14:12.965 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.966 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.224 00:14:13.224 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.224 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:14:13.225 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.483 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.483 { 00:14:13.483 "cntlid": 131, 00:14:13.483 "qid": 0, 00:14:13.483 "state": "enabled", 00:14:13.483 "thread": "nvmf_tgt_poll_group_000", 00:14:13.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:13.483 "listen_address": { 00:14:13.483 "trtype": "RDMA", 00:14:13.483 "adrfam": "IPv4", 00:14:13.483 "traddr": "192.168.100.8", 00:14:13.483 "trsvcid": "4420" 00:14:13.483 }, 00:14:13.483 "peer_address": { 00:14:13.483 "trtype": "RDMA", 00:14:13.484 "adrfam": "IPv4", 00:14:13.484 "traddr": "192.168.100.8", 00:14:13.484 "trsvcid": "42615" 00:14:13.484 }, 00:14:13.484 "auth": { 00:14:13.484 "state": "completed", 00:14:13.484 "digest": "sha512", 00:14:13.484 "dhgroup": "ffdhe6144" 00:14:13.484 } 00:14:13.484 } 00:14:13.484 ]' 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.484 21:11:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.743 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:13.743 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret 
DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:14.310 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.569 21:11:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.828 00:14:14.828 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.828 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.828 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.086 { 00:14:15.086 "cntlid": 133, 00:14:15.086 "qid": 0, 00:14:15.086 "state": "enabled", 00:14:15.086 "thread": "nvmf_tgt_poll_group_000", 00:14:15.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:15.086 "listen_address": { 00:14:15.086 "trtype": "RDMA", 00:14:15.086 "adrfam": "IPv4", 00:14:15.086 "traddr": "192.168.100.8", 00:14:15.086 "trsvcid": "4420" 00:14:15.086 }, 00:14:15.086 "peer_address": { 00:14:15.086 "trtype": "RDMA", 00:14:15.086 "adrfam": "IPv4", 00:14:15.086 "traddr": "192.168.100.8", 00:14:15.086 "trsvcid": "42277" 00:14:15.086 }, 00:14:15.086 "auth": { 00:14:15.086 "state": "completed", 00:14:15.086 "digest": "sha512", 00:14:15.086 "dhgroup": "ffdhe6144" 00:14:15.086 } 00:14:15.086 } 00:14:15.086 ]' 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:15.086 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.345 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.345 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.345 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.345 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:15.345 21:11:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:15.913 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.171 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:16.171 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.171 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.171 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.172 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.172 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:16.172 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:14:16.172 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:14:16.172 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.431 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.689 00:14:16.689 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.690 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.690 21:11:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.948 { 00:14:16.948 "cntlid": 135, 00:14:16.948 "qid": 0, 00:14:16.948 "state": "enabled", 00:14:16.948 "thread": "nvmf_tgt_poll_group_000", 00:14:16.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:16.948 "listen_address": { 00:14:16.948 "trtype": "RDMA", 00:14:16.948 "adrfam": "IPv4", 00:14:16.948 "traddr": "192.168.100.8", 00:14:16.948 "trsvcid": "4420" 00:14:16.948 }, 00:14:16.948 "peer_address": { 00:14:16.948 "trtype": "RDMA", 00:14:16.948 "adrfam": "IPv4", 00:14:16.948 "traddr": "192.168.100.8", 00:14:16.948 "trsvcid": "33126" 00:14:16.948 }, 00:14:16.948 "auth": { 00:14:16.948 "state": "completed", 00:14:16.948 "digest": "sha512", 00:14:16.948 "dhgroup": "ffdhe6144" 00:14:16.948 } 00:14:16.948 } 00:14:16.948 ]' 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.948 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.207 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 
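
Each connect_authenticate pass in the trace above follows the same shape: the host RPC socket (/var/tmp/host.sock) is pinned to a single digest/dhgroup with bdev_nvme_set_options, the host NQN is allowed on the subsystem via nvmf_subsystem_add_host with a DH-HMAC-CHAP key pair, a controller is attached over RDMA (which is where the authentication handshake actually runs), and the resulting qpair is checked with nvmf_subsystem_get_qpairs plus jq for the expected digest, dhgroup, and "state": "completed". A minimal sketch of one pass, assuming the trace's addresses and helper names (hostrpc, rpc_cmd) and that key0/ckey0 are already registered on both sides; it mirrors the RPCs visible in the trace, not the verbatim target/auth.sh source:

    # One connect_authenticate pass (sketch, not the original script).
    digest=sha512 dhgroup=ffdhe6144 keyid=0
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

    # Pin the host to one digest/dhgroup so the negotiation is deterministic.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host NQN with a bidirectional DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attaching the controller performs the authentication.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
            -q "$hostnqn" -n "$subnqn" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # The qpair must report the negotiated parameters and a completed state.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    hostrpc bdev_nvme_detach_controller nvme0

For the kernel-initiator half of each pass, nvme connect is handed the same key material as the printable DHHC-1:XX:<base64>: secrets seen above; the two-digit field selects the key transformation (00 = key used as-is, 01/02/03 = SHA-256/-384/-512), and the base64 payload carries the key material plus an integrity CRC.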
00:14:17.207 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:17.774 21:11:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:17.774 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.033 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:18.597 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.597 { 00:14:18.597 "cntlid": 137, 00:14:18.597 "qid": 0, 00:14:18.597 "state": "enabled", 00:14:18.597 "thread": "nvmf_tgt_poll_group_000", 00:14:18.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:18.597 "listen_address": { 00:14:18.597 "trtype": "RDMA", 00:14:18.597 "adrfam": "IPv4", 00:14:18.597 "traddr": "192.168.100.8", 00:14:18.597 "trsvcid": "4420" 00:14:18.597 }, 00:14:18.597 "peer_address": { 00:14:18.597 "trtype": "RDMA", 00:14:18.597 "adrfam": "IPv4", 00:14:18.597 "traddr": "192.168.100.8", 00:14:18.597 "trsvcid": "39904" 00:14:18.597 }, 00:14:18.597 "auth": { 00:14:18.597 "state": "completed", 00:14:18.597 "digest": "sha512", 00:14:18.597 "dhgroup": "ffdhe8192" 00:14:18.597 } 00:14:18.597 } 00:14:18.597 ]' 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:18.597 21:11:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:18.855 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:19.420 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:19.679 21:11:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:19.937 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:14:19.937 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.937 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:19.937 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.938 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.196 00:14:20.196 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.196 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.196 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.455 { 00:14:20.455 "cntlid": 139, 00:14:20.455 "qid": 0, 00:14:20.455 "state": "enabled", 00:14:20.455 "thread": "nvmf_tgt_poll_group_000", 00:14:20.455 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:20.455 "listen_address": { 00:14:20.455 "trtype": "RDMA", 00:14:20.455 "adrfam": "IPv4", 00:14:20.455 "traddr": "192.168.100.8", 00:14:20.455 "trsvcid": "4420" 00:14:20.455 }, 00:14:20.455 "peer_address": { 00:14:20.455 "trtype": "RDMA", 00:14:20.455 "adrfam": "IPv4", 00:14:20.455 "traddr": "192.168.100.8", 00:14:20.455 "trsvcid": "41337" 00:14:20.455 }, 00:14:20.455 "auth": { 00:14:20.455 "state": "completed", 00:14:20.455 "digest": "sha512", 00:14:20.455 "dhgroup": "ffdhe8192" 00:14:20.455 } 00:14:20.455 } 00:14:20.455 ]' 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.455 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:20.714 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.714 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.714 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.714 21:11:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.714 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:20.714 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: --dhchap-ctrl-secret DHHC-1:02:YjMzZWU3ZmIxZDY5M2UxMTM4ZmRkZjcyNzg3NDgyMGFhMTM2MTkyYThmMDNjNzFhD1ASVA==: 00:14:21.282 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:21.541 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:21.800 21:11:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.059 00:14:22.059 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.059 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.059 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.317 { 00:14:22.317 "cntlid": 141, 00:14:22.317 "qid": 0, 00:14:22.317 "state": "enabled", 00:14:22.317 "thread": "nvmf_tgt_poll_group_000", 00:14:22.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:22.317 "listen_address": { 00:14:22.317 "trtype": "RDMA", 00:14:22.317 "adrfam": "IPv4", 00:14:22.317 "traddr": "192.168.100.8", 00:14:22.317 "trsvcid": "4420" 00:14:22.317 }, 00:14:22.317 "peer_address": { 00:14:22.317 "trtype": "RDMA", 00:14:22.317 "adrfam": "IPv4", 00:14:22.317 "traddr": "192.168.100.8", 00:14:22.317 "trsvcid": "56581" 00:14:22.317 }, 00:14:22.317 "auth": { 00:14:22.317 "state": "completed", 00:14:22.317 "digest": "sha512", 00:14:22.317 "dhgroup": "ffdhe8192" 00:14:22.317 } 00:14:22.317 } 00:14:22.317 ]' 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.317 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.576 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:22.576 21:11:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:01:NDhlNDNiNGU3Njk0ODFiMjQyNTVjNGQ1YzI1MzkxZWPfcIF5: 00:14:23.144 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.403 21:11:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:23.971 00:14:23.971 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.971 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.971 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.231 { 00:14:24.231 "cntlid": 143, 00:14:24.231 "qid": 0, 00:14:24.231 "state": "enabled", 00:14:24.231 "thread": "nvmf_tgt_poll_group_000", 00:14:24.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:24.231 "listen_address": { 00:14:24.231 "trtype": "RDMA", 00:14:24.231 "adrfam": "IPv4", 00:14:24.231 "traddr": "192.168.100.8", 00:14:24.231 "trsvcid": "4420" 00:14:24.231 }, 00:14:24.231 "peer_address": { 00:14:24.231 "trtype": "RDMA", 00:14:24.231 "adrfam": "IPv4", 00:14:24.231 "traddr": "192.168.100.8", 00:14:24.231 "trsvcid": "55004" 00:14:24.231 }, 00:14:24.231 "auth": { 00:14:24.231 "state": "completed", 00:14:24.231 "digest": "sha512", 00:14:24.231 "dhgroup": "ffdhe8192" 00:14:24.231 } 00:14:24.231 } 00:14:24.231 ]' 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:24.231 21:11:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.231 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.490 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:24.490 21:11:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:25.059 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.059 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:25.059 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.059 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.319 21:11:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.319 21:11:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:25.887 00:14:25.887 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.887 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.887 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.147 { 00:14:26.147 "cntlid": 145, 00:14:26.147 "qid": 0, 00:14:26.147 "state": "enabled", 00:14:26.147 "thread": "nvmf_tgt_poll_group_000", 00:14:26.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:26.147 "listen_address": { 00:14:26.147 "trtype": "RDMA", 00:14:26.147 "adrfam": "IPv4", 00:14:26.147 "traddr": "192.168.100.8", 00:14:26.147 "trsvcid": "4420" 00:14:26.147 }, 00:14:26.147 
"peer_address": { 00:14:26.147 "trtype": "RDMA", 00:14:26.147 "adrfam": "IPv4", 00:14:26.147 "traddr": "192.168.100.8", 00:14:26.147 "trsvcid": "58622" 00:14:26.147 }, 00:14:26.147 "auth": { 00:14:26.147 "state": "completed", 00:14:26.147 "digest": "sha512", 00:14:26.147 "dhgroup": "ffdhe8192" 00:14:26.147 } 00:14:26.147 } 00:14:26.147 ]' 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.147 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.406 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:26.406 21:11:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:Y2VlMDViMWE2Yjg5YTM3NjBkZjNlMzkyYWMwZTY5ZWU2NGU0NTg2MjAyZGZkMGM12tEBWA==: --dhchap-ctrl-secret DHHC-1:03:OTc5NDhhMTc2NWRmOTg1NjcxZDVmNTc5ZjBmYTFkNjE5OTFjM2M2YWEzMTM0OTczOWExMzQ0YmYxZTQzM2FlYVbGQYU=: 00:14:26.974 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.975 21:11:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:26.975 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:14:27.544 request: 00:14:27.544 { 00:14:27.544 "name": "nvme0", 00:14:27.544 "trtype": "rdma", 00:14:27.544 "traddr": "192.168.100.8", 00:14:27.544 "adrfam": "ipv4", 00:14:27.544 "trsvcid": "4420", 00:14:27.544 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:27.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:27.544 "prchk_reftag": false, 00:14:27.544 "prchk_guard": false, 00:14:27.544 "hdgst": false, 00:14:27.544 "ddgst": false, 00:14:27.544 "dhchap_key": "key2", 00:14:27.544 "allow_unrecognized_csi": false, 00:14:27.544 "method": "bdev_nvme_attach_controller", 00:14:27.544 "req_id": 1 00:14:27.544 } 00:14:27.544 Got JSON-RPC error response 00:14:27.544 response: 00:14:27.544 { 00:14:27.544 "code": -5, 00:14:27.544 "message": "Input/output error" 00:14:27.544 } 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:27.544 21:11:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:14:28.112 request: 00:14:28.112 { 00:14:28.112 "name": "nvme0", 00:14:28.112 "trtype": "rdma", 00:14:28.112 "traddr": "192.168.100.8", 00:14:28.112 "adrfam": "ipv4", 00:14:28.112 "trsvcid": "4420", 00:14:28.112 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:28.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:28.112 "prchk_reftag": false, 00:14:28.112 "prchk_guard": false, 00:14:28.112 "hdgst": false, 00:14:28.112 "ddgst": false, 00:14:28.112 "dhchap_key": "key1", 00:14:28.112 "dhchap_ctrlr_key": "ckey2", 00:14:28.112 "allow_unrecognized_csi": false, 00:14:28.112 "method": "bdev_nvme_attach_controller", 00:14:28.112 "req_id": 1 00:14:28.112 } 00:14:28.112 Got JSON-RPC error response 00:14:28.112 response: 00:14:28.112 { 00:14:28.112 "code": -5, 00:14:28.112 "message": "Input/output error" 00:14:28.112 } 00:14:28.112 21:11:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.112 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.113 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.372 request: 00:14:28.372 { 00:14:28.372 "name": "nvme0", 
00:14:28.372 "trtype": "rdma", 00:14:28.372 "traddr": "192.168.100.8", 00:14:28.372 "adrfam": "ipv4", 00:14:28.372 "trsvcid": "4420", 00:14:28.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:28.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:28.372 "prchk_reftag": false, 00:14:28.372 "prchk_guard": false, 00:14:28.372 "hdgst": false, 00:14:28.372 "ddgst": false, 00:14:28.372 "dhchap_key": "key1", 00:14:28.372 "dhchap_ctrlr_key": "ckey1", 00:14:28.372 "allow_unrecognized_csi": false, 00:14:28.372 "method": "bdev_nvme_attach_controller", 00:14:28.372 "req_id": 1 00:14:28.372 } 00:14:28.372 Got JSON-RPC error response 00:14:28.372 response: 00:14:28.372 { 00:14:28.372 "code": -5, 00:14:28.372 "message": "Input/output error" 00:14:28.372 } 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3859145 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3859145 ']' 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3859145 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859145 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859145' 00:14:28.372 killing process with pid 3859145 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3859145 00:14:28.372 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3859145 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:28.632 21:11:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3883326 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3883326 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3883326 ']' 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.632 21:11:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3883326 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3883326 ']' 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
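Note: the target has just been restarted with --wait-for-rpc, which holds nvmf_tgt before subsystem initialization so DHCHAP keys can be registered with the keyring first. A minimal sketch of the equivalent manual sequence follows; the contents of the batched rpc_cmd call below are not expanded in this trace, so the framework_start_init step is an assumption:

    # Start the target paused, with nvmf_auth debug logging enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    # Register key files while initialization is paused.
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.8hS
    # Resume initialization once the keys are in place (assumed step).
    ./scripts/rpc.py framework_start_init
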
00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.892 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.152 null0 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8hS 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.9sv ]] 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.9sv 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0op 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.152 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.0Jc ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Jc 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
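Note: each keyN/ckeyN pair loaded above follows the test's convention: keyN is the DHCHAP secret used to authenticate the host, while ckeyN is the optional controller secret that enables bidirectional authentication. Binding such a pair to a host uses the same RPCs seen elsewhere in this trace (key paths and NQNs copied from the log):

    ./scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.0op
    ./scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0Jc
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
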
00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.hgy 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Y9z ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Y9z 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.t83 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.412 21:11:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.980 nvme0n1 00:14:29.980 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.980 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.980 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.238 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.238 { 00:14:30.238 "cntlid": 1, 00:14:30.238 "qid": 0, 00:14:30.238 "state": "enabled", 00:14:30.238 "thread": "nvmf_tgt_poll_group_000", 00:14:30.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:30.238 "listen_address": { 00:14:30.238 "trtype": "RDMA", 00:14:30.238 "adrfam": "IPv4", 00:14:30.238 "traddr": "192.168.100.8", 00:14:30.238 "trsvcid": "4420" 00:14:30.238 }, 00:14:30.238 "peer_address": { 00:14:30.238 "trtype": "RDMA", 00:14:30.238 "adrfam": "IPv4", 00:14:30.238 "traddr": "192.168.100.8", 00:14:30.238 "trsvcid": "33158" 00:14:30.238 }, 00:14:30.238 "auth": { 00:14:30.239 "state": "completed", 00:14:30.239 "digest": "sha512", 00:14:30.239 "dhgroup": "ffdhe8192" 00:14:30.239 } 00:14:30.239 } 00:14:30.239 ]' 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.239 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.498 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:30.498 21:11:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:31.065 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key3 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.325 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.584 request: 00:14:31.584 { 00:14:31.584 "name": "nvme0", 00:14:31.584 "trtype": "rdma", 00:14:31.584 "traddr": "192.168.100.8", 00:14:31.584 "adrfam": "ipv4", 00:14:31.584 "trsvcid": "4420", 00:14:31.585 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:31.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:31.585 "prchk_reftag": false, 00:14:31.585 "prchk_guard": false, 00:14:31.585 "hdgst": false, 00:14:31.585 "ddgst": false, 00:14:31.585 "dhchap_key": "key3", 00:14:31.585 "allow_unrecognized_csi": false, 00:14:31.585 "method": "bdev_nvme_attach_controller", 00:14:31.585 "req_id": 1 00:14:31.585 } 00:14:31.585 Got JSON-RPC error response 00:14:31.585 response: 00:14:31.585 { 00:14:31.585 "code": -5, 00:14:31.585 "message": "Input/output error" 00:14:31.585 } 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.585 21:11:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
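Note: the NOT wrapper driving these negative checks succeeds only when the wrapped command fails; the real helper in autotest_common.sh adds the es bookkeeping visible in the trace above, so this is a simplified sketch:

    NOT() {
        # Succeed only if the wrapped command fails.
        if "$@"; then
            return 1
        fi
        return 0
    }
    # After the host's allowed digests were narrowed to sha256 above, the
    # attach with key3 is expected to fail (the trace below shows the
    # resulting JSON-RPC code -5, Input/output error).
    NOT bdev_connect -b nvme0 --dhchap-key key3
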
00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:31.847 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:32.107 request: 00:14:32.107 { 00:14:32.107 "name": "nvme0", 00:14:32.107 "trtype": "rdma", 00:14:32.107 "traddr": "192.168.100.8", 00:14:32.107 "adrfam": "ipv4", 00:14:32.107 "trsvcid": "4420", 00:14:32.107 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.107 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:32.107 "prchk_reftag": false, 00:14:32.107 "prchk_guard": false, 00:14:32.107 "hdgst": false, 00:14:32.107 "ddgst": false, 00:14:32.107 "dhchap_key": "key3", 00:14:32.107 "allow_unrecognized_csi": false, 00:14:32.107 "method": "bdev_nvme_attach_controller", 00:14:32.107 "req_id": 1 00:14:32.107 } 00:14:32.107 Got JSON-RPC error response 00:14:32.107 response: 00:14:32.107 { 00:14:32.107 "code": -5, 00:14:32.107 "message": "Input/output error" 00:14:32.107 } 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:32.107 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.366 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:32.625 request: 00:14:32.625 { 00:14:32.625 "name": "nvme0", 00:14:32.625 "trtype": "rdma", 00:14:32.625 "traddr": "192.168.100.8", 00:14:32.625 "adrfam": "ipv4", 00:14:32.625 "trsvcid": "4420", 00:14:32.625 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:32.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:32.625 "prchk_reftag": false, 00:14:32.625 "prchk_guard": false, 00:14:32.625 "hdgst": false, 00:14:32.625 "ddgst": false, 00:14:32.625 "dhchap_key": "key0", 00:14:32.625 "dhchap_ctrlr_key": "key1", 00:14:32.625 "allow_unrecognized_csi": false, 00:14:32.625 "method": "bdev_nvme_attach_controller", 00:14:32.625 "req_id": 1 00:14:32.625 } 00:14:32.625 Got JSON-RPC error response 00:14:32.625 response: 00:14:32.625 { 00:14:32.625 "code": -5, 00:14:32.625 "message": "Input/output error" 00:14:32.625 } 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:32.625 
21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:32.625 21:11:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:14:32.883 nvme0n1 00:14:32.883 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:14:32.883 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:14:32.883 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:33.142 21:11:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:34.078 nvme0n1 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:14:34.078 21:11:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:34.078 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.338 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.338 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:34.338 21:11:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid 00bafac1-9c9c-e711-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: --dhchap-ctrl-secret DHHC-1:03:YjBjNTk3ZjJjYjQzNzYwNDQ3ZjUwNGRjODM1NGYxMTBiNzgzN2E1MWQwODQ4MmNjZGVmYzA1Njc0MmFmNzVlOUDwh8c=: 00:14:34.905 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.906 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:35.165 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:35.424 request: 00:14:35.424 { 00:14:35.424 "name": "nvme0", 00:14:35.424 "trtype": "rdma", 00:14:35.424 "traddr": "192.168.100.8", 00:14:35.424 "adrfam": "ipv4", 00:14:35.424 "trsvcid": "4420", 00:14:35.424 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:35.424 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562", 00:14:35.424 "prchk_reftag": false, 00:14:35.424 "prchk_guard": false, 00:14:35.424 "hdgst": false, 00:14:35.424 "ddgst": false, 00:14:35.424 "dhchap_key": "key1", 00:14:35.424 "allow_unrecognized_csi": false, 00:14:35.424 "method": "bdev_nvme_attach_controller", 00:14:35.424 "req_id": 1 00:14:35.424 } 00:14:35.424 Got JSON-RPC error response 00:14:35.424 response: 00:14:35.424 { 00:14:35.424 "code": -5, 00:14:35.424 "message": "Input/output error" 00:14:35.424 } 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:35.424 21:11:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:36.363 nvme0n1 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.363 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:36.623 21:11:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:36.883 nvme0n1 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.883 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: '' 2s 00:14:37.142 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: ]] 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTdmNmQ3ZjQ4M2E3OTcwMWMxYmUyMmY4ZGEzOTI5YzD/aN20: 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:37.143 21:11:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.223 21:11:26 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: 2s 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: ]] 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODU2YzlmOGQzZjdlMjQ0ODA1Y2ZmYmFkZDI2OGMxZmQwOWM3OTBjOTU5MWNlNzNmXdcktw==: 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:39.223 21:11:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:41.129 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:41.129 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:41.129 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:41.129 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.388 21:11:28 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:41.388 21:11:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:42.331 nvme0n1 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:42.331 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:42.591 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:42.591 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:42.591 21:11:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:42.850 21:11:30 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:42.850 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:43.109 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:43.678 request: 00:14:43.678 { 00:14:43.678 "name": "nvme0", 00:14:43.678 "dhchap_key": "key1", 00:14:43.678 "dhchap_ctrlr_key": "key3", 00:14:43.678 "method": "bdev_nvme_set_keys", 00:14:43.678 "req_id": 1 00:14:43.678 } 00:14:43.678 Got JSON-RPC error response 00:14:43.678 response: 00:14:43.678 { 00:14:43.678 "code": -13, 00:14:43.678 "message": "Permission denied" 00:14:43.678 } 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:43.678 21:11:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.678 21:11:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:43.678 21:11:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:44.611 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:44.611 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:44.611 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:44.870 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:45.807 nvme0n1 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.807 
21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:45.807 21:11:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:46.066 request: 00:14:46.066 { 00:14:46.066 "name": "nvme0", 00:14:46.066 "dhchap_key": "key2", 00:14:46.066 "dhchap_ctrlr_key": "key0", 00:14:46.066 "method": "bdev_nvme_set_keys", 00:14:46.066 "req_id": 1 00:14:46.066 } 00:14:46.066 Got JSON-RPC error response 00:14:46.066 response: 00:14:46.066 { 00:14:46.066 "code": -13, 00:14:46.066 "message": "Permission denied" 00:14:46.066 } 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:46.066 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.325 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:46.325 21:11:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:47.261 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:47.261 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:47.261 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:47.520 21:11:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3859170 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3859170 ']' 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3859170 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3859170 00:14:47.520 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.521 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.521 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3859170' 00:14:47.521 killing process with pid 3859170 00:14:47.521 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3859170 00:14:47.521 21:11:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3859170 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:14:47.780 rmmod nvme_rdma 00:14:47.780 rmmod nvme_fabrics 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3883326 ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3883326 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3883326 ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3883326 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3883326 00:14:47.780 21:11:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3883326' 00:14:47.780 killing process with pid 3883326 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3883326 00:14:47.780 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3883326 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.8hS /tmp/spdk.key-sha256.0op /tmp/spdk.key-sha384.hgy /tmp/spdk.key-sha512.t83 /tmp/spdk.key-sha512.9sv /tmp/spdk.key-sha384.0Jc /tmp/spdk.key-sha256.Y9z '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:14:48.041 00:14:48.041 real 2m32.256s 00:14:48.041 user 5m52.099s 00:14:48.041 sys 0m19.799s 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 ************************************ 00:14:48.041 END TEST nvmf_auth_target 00:14:48.041 ************************************ 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 ************************************ 00:14:48.041 START TEST nvmf_srq_overwhelm 00:14:48.041 ************************************ 00:14:48.041 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:14:48.301 * Looking for test storage... 
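
The nvmf_auth_target sequence that finishes above exercises DH-HMAC-CHAP re-keying: a target-side rpc_cmd nvmf_subsystem_set_keys narrows which key pair the host may use, a host-side bdev_nvme_set_keys re-authenticates the live controller, and a pair outside the allowed set is refused with JSON-RPC error -13 (Permission denied). A minimal sketch of that round trip, assuming the target app listens on rpc.py's default /var/tmp/spdk.sock while the host app uses /var/tmp/host.sock as in this run; the NQNs and key names are copied from the trace:

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562

# 1. Target side: restrict the allowed key pair for this host.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2. Host side: re-authenticate the existing controller with the same pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 3. A pair the target does not allow fails with code -13, as in the log.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3 || echo "rejected as expected"

The NOT wrapper seen in the trace does exactly what step 3 does: it asserts that the RPC exits non-zero when the host proposes keys the subsystem no longer accepts.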
00:14:48.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:48.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.301 --rc genhtml_branch_coverage=1 00:14:48.301 --rc genhtml_function_coverage=1 00:14:48.301 --rc genhtml_legend=1 00:14:48.301 --rc geninfo_all_blocks=1 00:14:48.301 --rc geninfo_unexecuted_blocks=1 00:14:48.301 00:14:48.301 ' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:48.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.301 --rc genhtml_branch_coverage=1 00:14:48.301 --rc genhtml_function_coverage=1 00:14:48.301 --rc genhtml_legend=1 00:14:48.301 --rc geninfo_all_blocks=1 00:14:48.301 --rc geninfo_unexecuted_blocks=1 00:14:48.301 00:14:48.301 ' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:48.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.301 --rc genhtml_branch_coverage=1 00:14:48.301 --rc genhtml_function_coverage=1 00:14:48.301 --rc genhtml_legend=1 00:14:48.301 --rc geninfo_all_blocks=1 00:14:48.301 --rc geninfo_unexecuted_blocks=1 00:14:48.301 00:14:48.301 ' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:48.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:48.301 --rc genhtml_branch_coverage=1 00:14:48.301 --rc genhtml_function_coverage=1 00:14:48.301 --rc genhtml_legend=1 00:14:48.301 --rc geninfo_all_blocks=1 00:14:48.301 --rc geninfo_unexecuted_blocks=1 00:14:48.301 00:14:48.301 ' 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.301 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
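
The paths/export.sh records above show why the traced PATH carries so many repeated /opt/go, /opt/golangci and /opt/protoc entries: each source of the file prepends the same directories unconditionally. A hypothetical, idempotent variant of that prepend (not the repository's code, just an illustration of the pattern) would avoid the duplication:

prepend_path() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present, keep PATH unchanged
        *) PATH="$1:$PATH" ;;   # otherwise prepend exactly once
    esac
}
prepend_path /opt/golangci/1.54.2/bin
prepend_path /opt/protoc/21.7/bin
prepend_path /opt/go/1.21.1/bin
export PATH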
00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:48.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:14:48.302 21:11:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:14:54.872 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:14:54.872 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:14:54.872 Found net devices under 0000:18:00.0: mlx_0_0 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:14:54.872 Found net devices under 0000:18:00.1: mlx_0_1 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
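
The "Found net devices under ..." records above come from globbing each NIC's sysfs node to map a PCI address to its kernel interface name. A condensed sketch of that loop, with the two Mellanox addresses from this host hard-coded in place of the trace's pci_devs filtering:

# Standard Linux sysfs layout: /sys/bus/pci/devices/<bdf>/net/<ifname>.
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

On this runner the globs resolve to mlx_0_0 and mlx_0_1, which is why is_hw is set to yes and the rdma_device_init path loads the IB/RDMA kernel modules next.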
00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:54.872 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:14:54.873 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:54.873 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:14:54.873 altname enp24s0f0np0 00:14:54.873 altname ens785f0np0 00:14:54.873 inet 192.168.100.8/24 scope global mlx_0_0 00:14:54.873 valid_lft forever preferred_lft forever 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:14:54.873 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:54.873 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:14:54.873 altname enp24s0f1np1 00:14:54.873 altname ens785f1np1 00:14:54.873 inet 192.168.100.9/24 scope global mlx_0_1 00:14:54.873 valid_lft forever preferred_lft forever 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:14:54.873 192.168.100.9' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:14:54.873 192.168.100.9' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:14:54.873 192.168.100.9' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3890369 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3890369 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3890369 ']' 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
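For reference, the get_ip_address calls traced above reduce to a three-stage pipeline: "ip -o -4 addr show <if>" emits one record per address, awk picks the fourth field (the CIDR address), and cut drops the prefix length. A minimal standalone sketch of the same lookup (the mlx_0_* names are the aliases this rig assigned to the mlx5 ports; substitute your own interface names):

  # Print the first IPv4 address of an interface, without the /prefix
  get_ip_address() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # 192.168.100.9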
00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.873 [2024-11-26 21:11:41.369558] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:14:54.873 [2024-11-26 21:11:41.369603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.873 [2024-11-26 21:11:41.427662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.873 [2024-11-26 21:11:41.467778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.873 [2024-11-26 21:11:41.467812] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.873 [2024-11-26 21:11:41.467818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.873 [2024-11-26 21:11:41.467824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.873 [2024-11-26 21:11:41.467828] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.873 [2024-11-26 21:11:41.469062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.873 [2024-11-26 21:11:41.469083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.873 [2024-11-26 21:11:41.469176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.873 [2024-11-26 21:11:41.469177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.873 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.874 [2024-11-26 21:11:41.630467] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x109d530/0x10a1a20) succeed. 00:14:54.874 [2024-11-26 21:11:41.638666] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x109ebc0/0x10e30c0) succeed. 
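At this point the target is up: nvmf_tgt was launched with -i 0 -e 0xFFFF -m 0xF (the trace confirms a 4-core mask with reactors on cores 0-3 and tracepoint group mask 0xFFFF), and the rpc_cmd call created the RDMA transport, with the two create_ib_device notices showing both mlx5 ports registered. Since rpc_cmd issues RPCs over /var/tmp/spdk.sock, the equivalent manual sequence is roughly the following (a sketch assuming a checked-out SPDK tree; all flags copied verbatim from the trace):

  # Start the NVMe-oF target on cores 0-3 with all tracepoint groups enabled
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # (the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs)

  # Create the RDMA transport with the buffer sizing used by this test
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024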
00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.874 Malloc0 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:54.874 [2024-11-26 21:11:41.731695] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.874 21:11:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# grep -q -w nvme0n1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:55.450 Malloc1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:55.450 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.451 21:11:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:14:56.390 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.391 Malloc2 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.391 21:11:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:14:57.771 21:11:44 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:57.771 Malloc3 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.771 21:11:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:14:58.711 
21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:58.711 Malloc4 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.711 21:11:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.649 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:59.649 Malloc5 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.650 21:11:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:15:00.588 21:11:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:15:00.588 
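That completes the six passes of the provisioning loop (srq_overwhelm.sh lines 22-28): for each i in 0..5 the test creates subsystem cnode<i>, backs it with a 64 MiB / 512-byte-block malloc bdev, exposes it on the RDMA listener at 192.168.100.8:4420, connects the kernel initiator, and polls until /dev/nvme<i>n1 appears. Condensed into one loop (a sketch using the same RPCs and nvme-cli flags as the trace; the waitforblk helper is simplified to an uncapped poll, and the hostnqn/hostid UUID is this particular host's):

  for i in $(seq 0 5); do
      nqn=nqn.2016-06.io.spdk:cnode$i
      ./scripts/rpc.py nvmf_create_subsystem $nqn -a -s "$(printf 'SPDK%014u' "$i")"
      ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i      # 64 MiB bdev, 512 B blocks
      ./scripts/rpc.py nvmf_subsystem_add_ns $nqn Malloc$i
      ./scripts/rpc.py nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4420
      nvme connect -i 15 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 \
          --hostid=00bafac1-9c9c-e711-906e-0017a4403562 \
          -t rdma -n $nqn -a 192.168.100.8 -s 4420
      until lsblk -l -o NAME | grep -q -w nvme${i}n1; do sleep 1; done
  done

The fio-wrapper call that closes the loop then expands into the job file printed next.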
[global]
00:15:00.588 thread=1
00:15:00.588 invalidate=1
00:15:00.588 rw=read
00:15:00.588 time_based=1
00:15:00.588 runtime=10
00:15:00.588 ioengine=libaio
00:15:00.588 direct=1
00:15:00.588 bs=1048576
00:15:00.588 iodepth=128
00:15:00.588 norandommap=1
00:15:00.588 numjobs=13
00:15:00.588
00:15:00.588 [job0]
00:15:00.588 filename=/dev/nvme0n1
00:15:00.588 [job1]
00:15:00.588 filename=/dev/nvme1n1
00:15:00.588 [job2]
00:15:00.588 filename=/dev/nvme2n1
00:15:00.588 [job3]
00:15:00.588 filename=/dev/nvme3n1
00:15:00.588 [job4]
00:15:00.588 filename=/dev/nvme4n1
00:15:00.588 [job5]
00:15:00.588 filename=/dev/nvme5n1
00:15:00.875 Could not set queue depth (nvme0n1)
00:15:00.875 Could not set queue depth (nvme1n1)
00:15:00.875 Could not set queue depth (nvme2n1)
00:15:00.875 Could not set queue depth (nvme3n1)
00:15:00.875 Could not set queue depth (nvme4n1)
00:15:00.875 Could not set queue depth (nvme5n1)
00:15:01.134 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
00:15:01.134 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
00:15:01.134 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
00:15:01.134 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
00:15:01.134 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
00:15:01.134 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128
00:15:01.134 ...
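The job file above explains the thread count reported next: six job sections, each inheriting the global numjobs=13, yield 6 x 13 = 78 fio threads, and at iodepth=128 per thread the workload can keep up to 78 x 128 = 9,984 one-megabyte reads in flight, far more than the 1024 shared buffers configured on the transport, which is presumably the point of an "srq_overwhelm" test. The "Could not set queue depth" warnings are fio failing to adjust the block layer's queue depth on these devices; the run proceeds regardless, as the "Starting 78 threads" line below shows.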
00:15:01.134 fio-3.35 00:15:01.134 Starting 78 threads 00:15:16.012 00:15:16.012 job0: (groupid=0, jobs=1): err= 0: pid=3891714: Tue Nov 26 21:12:01 2024 00:15:16.012 read: IOPS=5, BW=5224KiB/s (5349kB/s)(66.0MiB/12937msec) 00:15:16.012 slat (usec): min=669, max=2150.1k, avg=164214.59, stdev=553063.26 00:15:16.012 clat (msec): min=2098, max=12933, avg=8145.49, stdev=4219.37 00:15:16.012 lat (msec): min=4030, max=12936, avg=8309.70, stdev=4191.22 00:15:16.012 clat percentiles (msec): 00:15:16.012 | 1.00th=[ 2106], 5.00th=[ 4144], 10.00th=[ 4144], 20.00th=[ 4144], 00:15:16.012 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 4245], 60.00th=[12818], 00:15:16.012 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:15:16.012 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:15:16.012 | 99.99th=[12953] 00:15:16.012 lat (msec) : >=2000=100.00% 00:15:16.012 cpu : usr=0.00%, sys=0.43%, ctx=65, majf=0, minf=16897 00:15:16.012 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.1%, 16=24.2%, 32=48.5%, >=64=4.5% 00:15:16.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.012 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:16.012 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.012 job0: (groupid=0, jobs=1): err= 0: pid=3891715: Tue Nov 26 21:12:01 2024 00:15:16.012 read: IOPS=13, BW=13.2MiB/s (13.9MB/s)(170MiB/12868msec) 00:15:16.012 slat (usec): min=121, max=2186.4k, avg=63343.10, stdev=327809.57 00:15:16.012 clat (msec): min=1106, max=8126, avg=6276.55, stdev=2360.01 00:15:16.012 lat (msec): min=1117, max=8135, avg=6339.90, stdev=2305.92 00:15:16.012 clat percentiles (msec): 00:15:16.012 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1217], 20.00th=[ 5604], 00:15:16.012 | 30.00th=[ 7080], 40.00th=[ 7215], 50.00th=[ 7282], 60.00th=[ 7416], 00:15:16.012 | 70.00th=[ 7617], 80.00th=[ 7819], 90.00th=[ 7953], 95.00th=[ 8020], 00:15:16.012 | 99.00th=[ 8087], 99.50th=[ 8154], 99.90th=[ 8154], 99.95th=[ 8154], 00:15:16.012 | 99.99th=[ 8154] 00:15:16.012 bw ( KiB/s): min= 2048, max=81920, per=0.75%, avg=22017.00, stdev=39935.33, samples=4 00:15:16.012 iops : min= 2, max= 80, avg=21.50, stdev=39.00, samples=4 00:15:16.012 lat (msec) : 2000=15.88%, >=2000=84.12% 00:15:16.012 cpu : usr=0.00%, sys=0.70%, ctx=336, majf=0, minf=32769 00:15:16.012 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.7%, 16=9.4%, 32=18.8%, >=64=62.9% 00:15:16.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.012 complete : 0=0.0%, 4=97.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.3% 00:15:16.012 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.012 job0: (groupid=0, jobs=1): err= 0: pid=3891716: Tue Nov 26 21:12:01 2024 00:15:16.012 read: IOPS=2, BW=2391KiB/s (2448kB/s)(30.0MiB/12849msec) 00:15:16.012 slat (usec): min=777, max=2164.0k, avg=358533.31, stdev=790390.04 00:15:16.012 clat (msec): min=2092, max=12847, avg=10392.49, stdev=3321.66 00:15:16.012 lat (msec): min=4197, max=12848, avg=10751.02, stdev=2955.17 00:15:16.012 clat percentiles (msec): 00:15:16.012 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6409], 00:15:16.013 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12818], 00:15:16.013 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:15:16.013 | 99.00th=[12818], 99.50th=[12818], 
99.90th=[12818], 99.95th=[12818], 00:15:16.013 | 99.99th=[12818] 00:15:16.013 lat (msec) : >=2000=100.00% 00:15:16.013 cpu : usr=0.00%, sys=0.21%, ctx=37, majf=0, minf=7681 00:15:16.013 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:16.013 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891717: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=1, BW=1116KiB/s (1142kB/s)(14.0MiB/12850msec) 00:15:16.013 slat (msec): min=3, max=2139, avg=768.35, stdev=1033.24 00:15:16.013 clat (msec): min=2092, max=12773, avg=9104.90, stdev=3686.41 00:15:16.013 lat (msec): min=4196, max=12849, avg=9873.26, stdev=3201.53 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 2089], 5.00th=[ 2089], 10.00th=[ 4212], 20.00th=[ 4212], 00:15:16.013 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[ 8557], 60.00th=[10671], 00:15:16.013 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:15:16.013 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:15:16.013 | 99.99th=[12818] 00:15:16.013 lat (msec) : >=2000=100.00% 00:15:16.013 cpu : usr=0.00%, sys=0.09%, ctx=34, majf=0, minf=3585 00:15:16.013 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891718: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=86, BW=86.4MiB/s (90.6MB/s)(1114MiB/12899msec) 00:15:16.013 slat (usec): min=36, max=2110.3k, avg=9694.95, stdev=108501.98 00:15:16.013 clat (msec): min=227, max=5583, avg=924.24, stdev=1335.89 00:15:16.013 lat (msec): min=227, max=5652, avg=933.94, stdev=1346.50 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 230], 5.00th=[ 232], 10.00th=[ 234], 20.00th=[ 239], 00:15:16.013 | 30.00th=[ 268], 40.00th=[ 317], 50.00th=[ 351], 60.00th=[ 485], 00:15:16.013 | 70.00th=[ 651], 80.00th=[ 827], 90.00th=[ 4329], 95.00th=[ 4463], 00:15:16.013 | 99.00th=[ 4530], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:15:16.013 | 99.99th=[ 5604] 00:15:16.013 bw ( KiB/s): min= 2048, max=503808, per=8.63%, avg=252672.00, stdev=186575.06, samples=8 00:15:16.013 iops : min= 2, max= 492, avg=246.75, stdev=182.20, samples=8 00:15:16.013 lat (msec) : 250=26.57%, 500=34.83%, 750=14.45%, 1000=7.54%, 2000=4.40% 00:15:16.013 lat (msec) : >=2000=12.21% 00:15:16.013 cpu : usr=0.05%, sys=1.17%, ctx=1263, majf=0, minf=32769 00:15:16.013 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.013 issued rwts: total=1114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891719: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=1, BW=1675KiB/s (1715kB/s)(21.0MiB/12838msec) 
00:15:16.013 slat (usec): min=683, max=4202.1k, avg=511963.43, stdev=1136473.41 00:15:16.013 clat (msec): min=2086, max=12836, avg=11181.43, stdev=3255.37 00:15:16.013 lat (msec): min=4218, max=12837, avg=11693.40, stdev=2514.58 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:15:16.013 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:15:16.013 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:15:16.013 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:15:16.013 | 99.99th=[12818] 00:15:16.013 lat (msec) : >=2000=100.00% 00:15:16.013 cpu : usr=0.00%, sys=0.13%, ctx=15, majf=0, minf=5377 00:15:16.013 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:15:16.013 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891720: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=2, BW=2858KiB/s (2927kB/s)(36.0MiB/12897msec) 00:15:16.013 slat (usec): min=767, max=2148.5k, avg=299958.20, stdev=743318.73 00:15:16.013 clat (msec): min=2098, max=12894, avg=11699.21, stdev=2542.10 00:15:16.013 lat (msec): min=4246, max=12896, avg=11999.17, stdev=1943.47 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 8557], 20.00th=[10671], 00:15:16.013 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:15:16.013 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:15:16.013 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:15:16.013 | 99.99th=[12953] 00:15:16.013 lat (msec) : >=2000=100.00% 00:15:16.013 cpu : usr=0.00%, sys=0.22%, ctx=55, majf=0, minf=9217 00:15:16.013 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:15:16.013 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891721: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=48, BW=48.8MiB/s (51.2MB/s)(529MiB/10829msec) 00:15:16.013 slat (usec): min=27, max=2145.7k, avg=20342.65, stdev=181811.37 00:15:16.013 clat (msec): min=64, max=4883, avg=1506.64, stdev=1751.89 00:15:16.013 lat (msec): min=313, max=4928, avg=1526.99, stdev=1760.79 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 334], 5.00th=[ 363], 10.00th=[ 388], 20.00th=[ 435], 00:15:16.013 | 30.00th=[ 464], 40.00th=[ 477], 50.00th=[ 542], 60.00th=[ 567], 00:15:16.013 | 70.00th=[ 592], 80.00th=[ 4463], 90.00th=[ 4665], 95.00th=[ 4732], 00:15:16.013 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:15:16.013 | 99.99th=[ 4866] 00:15:16.013 bw ( KiB/s): min= 2048, max=286720, per=4.68%, avg=136869.83, stdev=128030.40, samples=6 00:15:16.013 iops : min= 2, max= 280, avg=133.50, stdev=125.22, samples=6 00:15:16.013 lat (msec) : 100=0.19%, 500=45.75%, 750=27.60%, >=2000=26.47% 00:15:16.013 cpu : usr=0.02%, sys=0.85%, ctx=547, majf=0, minf=32769 00:15:16.013 IO depths : 1=0.2%, 
2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.1% 00:15:16.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.013 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:15:16.013 issued rwts: total=529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.013 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.013 job0: (groupid=0, jobs=1): err= 0: pid=3891722: Tue Nov 26 21:12:01 2024 00:15:16.013 read: IOPS=5, BW=5695KiB/s (5832kB/s)(72.0MiB/12945msec) 00:15:16.013 slat (usec): min=760, max=2085.3k, avg=150389.51, stdev=526895.24 00:15:16.013 clat (msec): min=2116, max=12944, avg=10589.06, stdev=3197.84 00:15:16.013 lat (msec): min=4201, max=12944, avg=10739.45, stdev=3044.73 00:15:16.013 clat percentiles (msec): 00:15:16.013 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:15:16.013 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:15:16.013 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:15:16.013 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:15:16.013 | 99.99th=[12953] 00:15:16.013 lat (msec) : >=2000=100.00% 00:15:16.014 cpu : usr=0.03%, sys=0.45%, ctx=79, majf=0, minf=18433 00:15:16.014 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.1%, 16=22.2%, 32=44.4%, >=64=12.5% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:16.014 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job0: (groupid=0, jobs=1): err= 0: pid=3891723: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=35, BW=35.9MiB/s (37.6MB/s)(464MiB/12937msec) 00:15:16.014 slat (usec): min=48, max=2132.6k, avg=23371.01, stdev=186808.53 00:15:16.014 clat (msec): min=302, max=8578, avg=3423.26, stdev=2596.32 00:15:16.014 lat (msec): min=303, max=10674, avg=3446.63, stdev=2616.76 00:15:16.014 clat percentiles (msec): 00:15:16.014 | 1.00th=[ 305], 5.00th=[ 317], 10.00th=[ 347], 20.00th=[ 384], 00:15:16.014 | 30.00th=[ 785], 40.00th=[ 827], 50.00th=[ 5470], 60.00th=[ 5671], 00:15:16.014 | 70.00th=[ 5805], 80.00th=[ 5873], 90.00th=[ 5940], 95.00th=[ 6007], 00:15:16.014 | 99.00th=[ 6074], 99.50th=[ 6409], 99.90th=[ 8557], 99.95th=[ 8557], 00:15:16.014 | 99.99th=[ 8557] 00:15:16.014 bw ( KiB/s): min= 2048, max=323584, per=2.95%, avg=86272.00, stdev=115138.71, samples=8 00:15:16.014 iops : min= 2, max= 316, avg=84.25, stdev=112.44, samples=8 00:15:16.014 lat (msec) : 500=22.41%, 1000=21.77%, 2000=0.86%, >=2000=54.96% 00:15:16.014 cpu : usr=0.00%, sys=0.85%, ctx=435, majf=0, minf=32769 00:15:16.014 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.9%, >=64=86.4% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:16.014 issued rwts: total=464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job0: (groupid=0, jobs=1): err= 0: pid=3891724: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=0, BW=795KiB/s (814kB/s)(10.0MiB/12877msec) 00:15:16.014 slat (msec): min=3, max=4165, avg=1077.09, stdev=1473.11 00:15:16.014 clat (msec): min=2105, max=12781, avg=8927.35, stdev=4346.32 00:15:16.014 lat (msec): min=4223, max=12876, avg=10004.44, stdev=3763.47 00:15:16.014 clat 
percentiles (msec): 00:15:16.014 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 2106], 20.00th=[ 4212], 00:15:16.014 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[12684], 00:15:16.014 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:15:16.014 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:15:16.014 | 99.99th=[12818] 00:15:16.014 lat (msec) : >=2000=100.00% 00:15:16.014 cpu : usr=0.02%, sys=0.03%, ctx=42, majf=0, minf=2561 00:15:16.014 IO depths : 1=10.0%, 2=20.0%, 4=40.0%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 issued rwts: total=10,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job0: (groupid=0, jobs=1): err= 0: pid=3891726: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=158, BW=158MiB/s (166MB/s)(1593MiB/10064msec) 00:15:16.014 slat (usec): min=48, max=2041.9k, avg=6272.64, stdev=52297.58 00:15:16.014 clat (msec): min=60, max=2837, avg=774.90, stdev=613.46 00:15:16.014 lat (msec): min=64, max=2847, avg=781.17, stdev=615.73 00:15:16.014 clat percentiles (msec): 00:15:16.014 | 1.00th=[ 165], 5.00th=[ 380], 10.00th=[ 405], 20.00th=[ 472], 00:15:16.014 | 30.00th=[ 542], 40.00th=[ 584], 50.00th=[ 634], 60.00th=[ 651], 00:15:16.014 | 70.00th=[ 684], 80.00th=[ 785], 90.00th=[ 911], 95.00th=[ 2802], 00:15:16.014 | 99.00th=[ 2836], 99.50th=[ 2836], 99.90th=[ 2836], 99.95th=[ 2836], 00:15:16.014 | 99.99th=[ 2836] 00:15:16.014 bw ( KiB/s): min=55296, max=325632, per=6.41%, avg=187674.38, stdev=76223.90, samples=16 00:15:16.014 iops : min= 54, max= 318, avg=183.25, stdev=74.43, samples=16 00:15:16.014 lat (msec) : 100=0.13%, 250=1.32%, 500=23.10%, 750=54.05%, 1000=13.37% 00:15:16.014 lat (msec) : 2000=0.06%, >=2000=7.97% 00:15:16.014 cpu : usr=0.16%, sys=2.53%, ctx=1378, majf=0, minf=32769 00:15:16.014 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.014 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job0: (groupid=0, jobs=1): err= 0: pid=3891727: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=7, BW=7950KiB/s (8140kB/s)(100MiB/12881msec) 00:15:16.014 slat (usec): min=352, max=2095.9k, avg=107618.35, stdev=444695.48 00:15:16.014 clat (msec): min=2118, max=12877, avg=11620.10, stdev=2570.02 00:15:16.014 lat (msec): min=4208, max=12880, avg=11727.71, stdev=2386.91 00:15:16.014 clat percentiles (msec): 00:15:16.014 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[12550], 00:15:16.014 | 30.00th=[12550], 40.00th=[12684], 50.00th=[12684], 60.00th=[12684], 00:15:16.014 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:15:16.014 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:15:16.014 | 99.99th=[12818] 00:15:16.014 lat (msec) : >=2000=100.00% 00:15:16.014 cpu : usr=0.00%, sys=0.49%, ctx=81, majf=0, minf=25601 00:15:16.014 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:16.014 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job1: (groupid=0, jobs=1): err= 0: pid=3891736: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=9, BW=9384KiB/s (9609kB/s)(100MiB/10912msec) 00:15:16.014 slat (usec): min=754, max=2098.5k, avg=108298.61, stdev=453859.38 00:15:16.014 clat (msec): min=81, max=10910, avg=8583.69, stdev=3040.39 00:15:16.014 lat (msec): min=2144, max=10911, avg=8691.99, stdev=2925.17 00:15:16.014 clat percentiles (msec): 00:15:16.014 | 1.00th=[ 82], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477], 00:15:16.014 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[10805], 00:15:16.014 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939], 00:15:16.014 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:15:16.014 | 99.99th=[10939] 00:15:16.014 lat (msec) : 100=1.00%, >=2000=99.00% 00:15:16.014 cpu : usr=0.00%, sys=0.79%, ctx=92, majf=0, minf=25601 00:15:16.014 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.0%, 16=16.0%, 32=32.0%, >=64=37.0% 00:15:16.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.014 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:15:16.014 issued rwts: total=100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.014 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.014 job1: (groupid=0, jobs=1): err= 0: pid=3891737: Tue Nov 26 21:12:01 2024 00:15:16.014 read: IOPS=77, BW=77.6MiB/s (81.4MB/s)(1002MiB/12906msec) 00:15:16.014 slat (usec): min=77, max=2207.6k, avg=10784.78, stdev=96449.36 00:15:16.014 clat (msec): min=375, max=6900, avg=1549.82, stdev=1974.54 00:15:16.014 lat (msec): min=377, max=6903, avg=1560.61, stdev=1980.10 00:15:16.014 clat percentiles (msec): 00:15:16.014 | 1.00th=[ 380], 5.00th=[ 384], 10.00th=[ 397], 20.00th=[ 451], 00:15:16.014 | 30.00th=[ 477], 40.00th=[ 485], 50.00th=[ 751], 60.00th=[ 1167], 00:15:16.015 | 70.00th=[ 1250], 80.00th=[ 1552], 90.00th=[ 6544], 95.00th=[ 6678], 00:15:16.015 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:15:16.015 | 99.99th=[ 6879] 00:15:16.015 bw ( KiB/s): min= 2048, max=329728, per=5.10%, avg=149315.25, stdev=110144.86, samples=12 00:15:16.015 iops : min= 2, max= 322, avg=145.75, stdev=107.59, samples=12 00:15:16.015 lat (msec) : 500=45.61%, 750=4.29%, 1000=3.69%, 2000=33.33%, >=2000=13.07% 00:15:16.015 cpu : usr=0.03%, sys=1.36%, ctx=1820, majf=0, minf=32769 00:15:16.015 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.015 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.015 issued rwts: total=1002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.015 job1: (groupid=0, jobs=1): err= 0: pid=3891738: Tue Nov 26 21:12:01 2024 00:15:16.015 read: IOPS=148, BW=148MiB/s (155MB/s)(1487MiB/10030msec) 00:15:16.015 slat (usec): min=35, max=2052.6k, avg=6725.57, stdev=54307.72 00:15:16.015 clat (msec): min=23, max=2679, avg=820.19, stdev=566.37 00:15:16.015 lat (msec): min=30, max=2698, avg=826.92, stdev=568.02 00:15:16.015 clat percentiles (msec): 00:15:16.015 | 1.00th=[ 121], 5.00th=[ 347], 10.00th=[ 384], 20.00th=[ 567], 00:15:16.015 | 30.00th=[ 600], 40.00th=[ 625], 50.00th=[ 684], 60.00th=[ 768], 
00:15:16.015 | 70.00th=[ 793], 80.00th=[ 869], 90.00th=[ 944], 95.00th=[ 2567], 00:15:16.015 | 99.00th=[ 2668], 99.50th=[ 2668], 99.90th=[ 2668], 99.95th=[ 2668], 00:15:16.015 | 99.99th=[ 2668] 00:15:16.015 bw ( KiB/s): min=34816, max=356352, per=6.01%, avg=175854.93, stdev=82003.58, samples=15 00:15:16.015 iops : min= 34, max= 348, avg=171.73, stdev=80.08, samples=15 00:15:16.015 lat (msec) : 50=0.34%, 100=0.54%, 250=1.34%, 500=12.98%, 750=40.62% 00:15:16.015 lat (msec) : 1000=35.64%, >=2000=8.54% 00:15:16.015 cpu : usr=0.03%, sys=1.39%, ctx=1489, majf=0, minf=32769 00:15:16.015 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.8% 00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.015 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.015 issued rwts: total=1487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.015 job1: (groupid=0, jobs=1): err= 0: pid=3891739: Tue Nov 26 21:12:01 2024 00:15:16.015 read: IOPS=70, BW=70.9MiB/s (74.4MB/s)(912MiB/12857msec) 00:15:16.015 slat (usec): min=54, max=2199.8k, avg=11789.35, stdev=100963.64 00:15:16.015 clat (msec): min=449, max=6884, avg=1671.06, stdev=2028.87 00:15:16.015 lat (msec): min=454, max=6890, avg=1682.85, stdev=2034.17 00:15:16.015 clat percentiles (msec): 00:15:16.015 | 1.00th=[ 456], 5.00th=[ 460], 10.00th=[ 468], 20.00th=[ 477], 00:15:16.015 | 30.00th=[ 527], 40.00th=[ 592], 50.00th=[ 936], 60.00th=[ 1217], 00:15:16.015 | 70.00th=[ 1284], 80.00th=[ 1552], 90.00th=[ 6544], 95.00th=[ 6745], 00:15:16.015 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:15:16.015 | 99.99th=[ 6879] 00:15:16.015 bw ( KiB/s): min= 2052, max=276480, per=4.58%, avg=133973.67, stdev=94055.42, samples=12 00:15:16.015 iops : min= 2, max= 270, avg=130.83, stdev=91.85, samples=12 00:15:16.015 lat (msec) : 500=26.21%, 750=19.85%, 1000=5.04%, 2000=34.65%, >=2000=14.25% 00:15:16.015 cpu : usr=0.01%, sys=1.31%, ctx=1809, majf=0, minf=32769 00:15:16.015 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.1% 00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.015 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.015 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.015 job1: (groupid=0, jobs=1): err= 0: pid=3891740: Tue Nov 26 21:12:01 2024 00:15:16.015 read: IOPS=20, BW=20.5MiB/s (21.5MB/s)(265MiB/12942msec) 00:15:16.015 slat (usec): min=45, max=2077.8k, avg=40860.38, stdev=256340.13 00:15:16.015 clat (msec): min=727, max=12137, avg=6039.98, stdev=5165.92 00:15:16.015 lat (msec): min=728, max=12141, avg=6080.84, stdev=5171.13 00:15:16.015 clat percentiles (msec): 00:15:16.015 | 1.00th=[ 735], 5.00th=[ 768], 10.00th=[ 776], 20.00th=[ 785], 00:15:16.015 | 30.00th=[ 802], 40.00th=[ 818], 50.00th=[ 4245], 60.00th=[10671], 00:15:16.015 | 70.00th=[11610], 80.00th=[11879], 90.00th=[12013], 95.00th=[12013], 00:15:16.015 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:15:16.015 | 99.99th=[12147] 00:15:16.015 bw ( KiB/s): min= 2048, max=145408, per=1.38%, avg=40374.86, stdev=55883.44, samples=7 00:15:16.015 iops : min= 2, max= 142, avg=39.43, stdev=54.57, samples=7 00:15:16.015 lat (msec) : 750=1.51%, 1000=42.26%, 2000=1.89%, >=2000=54.34% 00:15:16.015 cpu : usr=0.02%, sys=0.88%, ctx=250, 
majf=0, minf=32769
00:15:16.015 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.0%, 16=6.0%, 32=12.1%, >=64=76.2%
00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.015 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7%
00:15:16.015 issued rwts: total=265,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.015 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.015 job1: (groupid=0, jobs=1): err= 0: pid=3891741: Tue Nov 26 21:12:01 2024
00:15:16.015 read: IOPS=75, BW=76.0MiB/s (79.6MB/s)(826MiB/10875msec)
00:15:16.015 slat (usec): min=39, max=2121.2k, avg=13068.01, stdev=127447.43
00:15:16.015 clat (msec): min=77, max=7424, avg=1595.29, stdev=2324.44
00:15:16.015 lat (msec): min=434, max=7426, avg=1608.35, stdev=2330.93
00:15:16.015 clat percentiles (msec):
00:15:16.015 | 1.00th=[ 451], 5.00th=[ 468], 10.00th=[ 472], 20.00th=[ 510],
00:15:16.015 | 30.00th=[ 567], 40.00th=[ 592], 50.00th=[ 617], 60.00th=[ 642],
00:15:16.015 | 70.00th=[ 684], 80.00th=[ 760], 90.00th=[ 7013], 95.00th=[ 7215],
00:15:16.015 | 99.00th=[ 7349], 99.50th=[ 7416], 99.90th=[ 7416], 99.95th=[ 7416],
00:15:16.015 | 99.99th=[ 7416]
00:15:16.015 bw ( KiB/s): min= 4096, max=278528, per=4.88%, avg=142950.40, stdev=104100.74, samples=10
00:15:16.015 iops : min= 4, max= 272, avg=139.60, stdev=101.66, samples=10
00:15:16.015 lat (msec) : 100=0.12%, 500=18.77%, 750=60.65%, 1000=4.12%, 2000=0.36%
00:15:16.015 lat (msec) : >=2000=15.98%
00:15:16.015 cpu : usr=0.03%, sys=1.01%, ctx=1127, majf=0, minf=32769
00:15:16.015 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.015 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.015 issued rwts: total=826,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.015 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.015 job1: (groupid=0, jobs=1): err= 0: pid=3891742: Tue Nov 26 21:12:01 2024
00:15:16.015 read: IOPS=97, BW=97.8MiB/s (103MB/s)(1265MiB/12928msec)
00:15:16.015 slat (usec): min=63, max=2081.8k, avg=8542.70, stdev=116232.40
00:15:16.015 clat (msec): min=121, max=10927, avg=1269.24, stdev=3051.79
00:15:16.015 lat (msec): min=122, max=10927, avg=1277.79, stdev=3063.31
00:15:16.015 clat percentiles (msec):
00:15:16.015 | 1.00th=[ 123], 5.00th=[ 124], 10.00th=[ 124], 20.00th=[ 125],
00:15:16.015 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 211],
00:15:16.015 | 70.00th=[ 368], 80.00th=[ 393], 90.00th=[ 6342], 95.00th=[10805],
00:15:16.015 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.015 | 99.99th=[10939]
00:15:16.015 bw ( KiB/s): min= 2048, max=972800, per=8.85%, avg=258958.22, stdev=370147.48, samples=9
00:15:16.015 iops : min= 2, max= 950, avg=252.89, stdev=361.47, samples=9
00:15:16.015 lat (msec) : 250=63.79%, 500=23.16%, 750=1.19%, >=2000=11.86%
00:15:16.015 cpu : usr=0.02%, sys=1.07%, ctx=1215, majf=0, minf=32770
00:15:16.015 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0%
00:15:16.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.015 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.016 issued rwts: total=1265,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891743: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=2, BW=2753KiB/s (2820kB/s)(29.0MiB/10785msec)
00:15:16.016 slat (usec): min=738, max=2134.7k, avg=369370.68, stdev=803771.42
00:15:16.016 clat (msec): min=72, max=10782, avg=8337.79, stdev=3122.82
00:15:16.016 lat (msec): min=2172, max=10784, avg=8707.16, stdev=2717.45
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 72], 5.00th=[ 2165], 10.00th=[ 4329], 20.00th=[ 4396],
00:15:16.016 | 30.00th=[ 6544], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[10805],
00:15:16.016 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.016 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.016 | 99.99th=[10805]
00:15:16.016 lat (msec) : 100=3.45%, >=2000=96.55%
00:15:16.016 cpu : usr=0.00%, sys=0.21%, ctx=37, majf=0, minf=7425
00:15:16.016 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:15:16.016 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891744: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=7, BW=7827KiB/s (8015kB/s)(99.0MiB/12952msec)
00:15:16.016 slat (usec): min=695, max=2070.4k, avg=109472.88, stdev=450488.36
00:15:16.016 clat (msec): min=2113, max=12950, avg=10971.13, stdev=3034.44
00:15:16.016 lat (msec): min=4183, max=12951, avg=11080.60, stdev=2904.33
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490],
00:15:16.016 | 30.00th=[10671], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953],
00:15:16.016 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:15:16.016 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:15:16.016 | 99.99th=[12953]
00:15:16.016 lat (msec) : >=2000=100.00%
00:15:16.016 cpu : usr=0.01%, sys=0.64%, ctx=94, majf=0, minf=25345
00:15:16.016 IO depths : 1=1.0%, 2=2.0%, 4=4.0%, 8=8.1%, 16=16.2%, 32=32.3%, >=64=36.4%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:15:16.016 issued rwts: total=99,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891745: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=5, BW=5382KiB/s (5511kB/s)(57.0MiB/10845msec)
00:15:16.016 slat (usec): min=518, max=2159.4k, avg=188908.92, stdev=598162.90
00:15:16.016 clat (msec): min=76, max=10843, avg=7094.54, stdev=3106.63
00:15:16.016 lat (msec): min=2144, max=10844, avg=7283.45, stdev=2997.76
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 78], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 4329],
00:15:16.016 | 30.00th=[ 6409], 40.00th=[ 6477], 50.00th=[ 6477], 60.00th=[ 8658],
00:15:16.016 | 70.00th=[ 8658], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.016 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.016 | 99.99th=[10805]
00:15:16.016 lat (msec) : 100=1.75%, >=2000=98.25%
00:15:16.016 cpu : usr=0.00%, sys=0.41%, ctx=47, majf=0, minf=14593
00:15:16.016 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.016 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891746: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=59, BW=59.0MiB/s (61.9MB/s)(644MiB/10911msec)
00:15:16.016 slat (usec): min=61, max=2137.7k, avg=16839.88, stdev=165396.40
00:15:16.016 clat (msec): min=63, max=8918, avg=2080.55, stdev=3253.16
00:15:16.016 lat (msec): min=244, max=8919, avg=2097.39, stdev=3261.85
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 247], 20.00th=[ 255],
00:15:16.016 | 30.00th=[ 334], 40.00th=[ 489], 50.00th=[ 518], 60.00th=[ 531],
00:15:16.016 | 70.00th=[ 659], 80.00th=[ 4329], 90.00th=[ 8792], 95.00th=[ 8792],
00:15:16.016 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926],
00:15:16.016 | 99.99th=[ 8926]
00:15:16.016 bw ( KiB/s): min= 2048, max=432128, per=5.16%, avg=150966.86, stdev=161613.52, samples=7
00:15:16.016 iops : min= 2, max= 422, avg=147.43, stdev=157.83, samples=7
00:15:16.016 lat (msec) : 100=0.16%, 250=14.29%, 500=27.17%, 750=30.90%, 1000=6.06%
00:15:16.016 lat (msec) : >=2000=21.43%
00:15:16.016 cpu : usr=0.04%, sys=1.03%, ctx=988, majf=0, minf=32769
00:15:16.016 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=5.0%, >=64=90.2%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.016 issued rwts: total=644,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891747: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=148, BW=149MiB/s (156MB/s)(1924MiB/12942msec)
00:15:16.016 slat (usec): min=51, max=2108.7k, avg=5628.51, stdev=67567.63
00:15:16.016 clat (msec): min=229, max=6658, avg=832.39, stdev=1512.57
00:15:16.016 lat (msec): min=230, max=6658, avg=838.01, stdev=1517.50
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 232], 5.00th=[ 234], 10.00th=[ 234], 20.00th=[ 236],
00:15:16.016 | 30.00th=[ 239], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 584],
00:15:16.016 | 70.00th=[ 625], 80.00th=[ 760], 90.00th=[ 810], 95.00th=[ 6477],
00:15:16.016 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6678], 99.95th=[ 6678],
00:15:16.016 | 99.99th=[ 6678]
00:15:16.016 bw ( KiB/s): min= 2048, max=552960, per=8.38%, avg=245350.40, stdev=203464.74, samples=15
00:15:16.016 iops : min= 2, max= 540, avg=239.60, stdev=198.70, samples=15
00:15:16.016 lat (msec) : 250=50.00%, 500=4.42%, 750=24.22%, 1000=14.40%, >=2000=6.96%
00:15:16.016 cpu : usr=0.05%, sys=2.34%, ctx=1689, majf=0, minf=32769
00:15:16.016 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.016 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.016 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.016 job1: (groupid=0, jobs=1): err= 0: pid=3891748: Tue Nov 26 21:12:01 2024
00:15:16.016 read: IOPS=91, BW=91.9MiB/s (96.4MB/s)(999MiB/10870msec)
00:15:16.016 slat (usec): min=52, max=2083.1k, avg=10811.65, stdev=99491.21
00:15:16.016 clat (msec): min=64, max=6414, avg=1329.16, stdev=1222.91
00:15:16.016 lat (msec): min=420, max=6493, avg=1339.97, stdev=1226.05
00:15:16.016 clat percentiles (msec):
00:15:16.016 | 1.00th=[ 422], 5.00th=[ 430], 10.00th=[ 460], 20.00th=[ 575],
00:15:16.016 | 30.00th=[ 642], 40.00th=[ 651], 50.00th=[ 659], 60.00th=[ 693],
00:15:16.016 | 70.00th=[ 1020], 80.00th=[ 2668], 90.00th=[ 3775], 95.00th=[ 3977],
00:15:16.016 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 6409], 99.95th=[ 6409],
00:15:16.016 | 99.99th=[ 6409]
00:15:16.016 bw ( KiB/s): min=10240, max=303104, per=5.08%, avg=148650.67, stdev=83128.06, samples=12
00:15:16.016 iops : min= 10, max= 296, avg=145.17, stdev=81.18, samples=12
00:15:16.016 lat (msec) : 100=0.10%, 500=13.71%, 750=49.35%, 1000=6.21%, 2000=5.31%
00:15:16.016 lat (msec) : >=2000=25.33%
00:15:16.016 cpu : usr=0.03%, sys=1.47%, ctx=1094, majf=0, minf=32769
00:15:16.016 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7%
00:15:16.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.016 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.017 issued rwts: total=999,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891753: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=2, BW=3022KiB/s (3094kB/s)(32.0MiB/10844msec)
00:15:16.017 slat (usec): min=1374, max=2095.3k, avg=336221.95, stdev=756764.39
00:15:16.017 clat (msec): min=83, max=10835, avg=7462.66, stdev=3590.95
00:15:16.017 lat (msec): min=2146, max=10843, avg=7798.88, stdev=3374.98
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 85], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4329],
00:15:16.017 | 30.00th=[ 4329], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671],
00:15:16.017 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:15:16.017 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.017 | 99.99th=[10805]
00:15:16.017 lat (msec) : 100=3.12%, >=2000=96.88%
00:15:16.017 cpu : usr=0.00%, sys=0.18%, ctx=82, majf=0, minf=8193
00:15:16.017 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0%
00:15:16.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.017 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:15:16.017 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891754: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=148, BW=149MiB/s (156MB/s)(1494MiB/10050msec)
00:15:16.017 slat (usec): min=33, max=2050.4k, avg=6692.49, stdev=67923.67
00:15:16.017 clat (msec): min=45, max=5218, avg=827.95, stdev=1143.09
00:15:16.017 lat (msec): min=66, max=5219, avg=834.64, stdev=1147.40
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 86], 5.00th=[ 266], 10.00th=[ 338], 20.00th=[ 363],
00:15:16.017 | 30.00th=[ 380], 40.00th=[ 397], 50.00th=[ 447], 60.00th=[ 481],
00:15:16.017 | 70.00th=[ 584], 80.00th=[ 751], 90.00th=[ 1053], 95.00th=[ 4597],
00:15:16.017 | 99.00th=[ 4665], 99.50th=[ 4665], 99.90th=[ 5201], 99.95th=[ 5201],
00:15:16.017 | 99.99th=[ 5201]
00:15:16.017 bw ( KiB/s): min= 8192, max=374784, per=6.83%, avg=199972.57, stdev=130127.32, samples=14
00:15:16.017 iops : min= 8, max= 366, avg=195.29, stdev=127.08, samples=14
00:15:16.017 lat (msec) : 50=0.07%, 100=1.07%, 250=3.15%, 500=58.63%, 750=17.07%
00:15:16.017 lat (msec) : 1000=8.63%, 2000=2.61%, >=2000=8.77%
00:15:16.017 cpu : usr=0.03%, sys=1.88%, ctx=1624, majf=0, minf=32769
00:15:16.017 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8%
00:15:16.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.017 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.017 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891755: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=30, BW=30.6MiB/s (32.1MB/s)(393MiB/12828msec)
00:15:16.017 slat (usec): min=426, max=2165.5k, avg=27321.08, stdev=186173.39
00:15:16.017 clat (msec): min=1086, max=9572, avg=3876.14, stdev=3493.24
00:15:16.017 lat (msec): min=1093, max=9575, avg=3903.46, stdev=3498.24
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 1099], 5.00th=[ 1133], 10.00th=[ 1217], 20.00th=[ 1250],
00:15:16.017 | 30.00th=[ 1318], 40.00th=[ 1469], 50.00th=[ 1754], 60.00th=[ 1838],
00:15:16.017 | 70.00th=[ 8557], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9329],
00:15:16.017 | 99.00th=[ 9463], 99.50th=[ 9597], 99.90th=[ 9597], 99.95th=[ 9597],
00:15:16.017 | 99.99th=[ 9597]
00:15:16.017 bw ( KiB/s): min= 2048, max=112640, per=2.07%, avg=60530.22, stdev=48810.07, samples=9
00:15:16.017 iops : min= 2, max= 110, avg=59.11, stdev=47.67, samples=9
00:15:16.017 lat (msec) : 2000=66.67%, >=2000=33.33%
00:15:16.017 cpu : usr=0.00%, sys=0.75%, ctx=779, majf=0, minf=32769
00:15:16.017 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0%
00:15:16.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.017 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:15:16.017 issued rwts: total=393,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891756: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=3, BW=4037KiB/s (4134kB/s)(51.0MiB/12935msec)
00:15:16.017 slat (usec): min=639, max=2131.9k, avg=212057.79, stdev=624252.64
00:15:16.017 clat (msec): min=2119, max=12932, avg=11777.86, stdev=2652.93
00:15:16.017 lat (msec): min=4192, max=12934, avg=11989.92, stdev=2270.08
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 8490], 20.00th=[12684],
00:15:16.017 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953],
00:15:16.017 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:15:16.017 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:15:16.017 | 99.99th=[12953]
00:15:16.017 lat (msec) : >=2000=100.00%
00:15:16.017 cpu : usr=0.00%, sys=0.39%, ctx=94, majf=0, minf=13057
00:15:16.017 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0%
00:15:16.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.017 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.017 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891757: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=4, BW=4454KiB/s (4561kB/s)(47.0MiB/10805msec)
00:15:16.017 slat (usec): min=665, max=2095.1k, avg=228117.27, stdev=644847.19
00:15:16.017 clat (msec): min=82, max=10792, avg=7303.28, stdev=2701.93
00:15:16.017 lat (msec): min=2156, max=10804, avg=7531.40, stdev=2525.90
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 83], 5.00th=[ 2198], 10.00th=[ 4279], 20.00th=[ 4329],
00:15:16.017 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[ 6544], 60.00th=[ 8658],
00:15:16.017 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:15:16.017 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.017 | 99.99th=[10805]
00:15:16.017 lat (msec) : 100=2.13%, >=2000=97.87%
00:15:16.017 cpu : usr=0.00%, sys=0.33%, ctx=60, majf=0, minf=12033
00:15:16.017 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:15:16.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.017 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.017 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.017 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.017 job2: (groupid=0, jobs=1): err= 0: pid=3891758: Tue Nov 26 21:12:01 2024
00:15:16.017 read: IOPS=10, BW=10.8MiB/s (11.4MB/s)(139MiB/12836msec)
00:15:16.017 slat (usec): min=499, max=4243.6k, avg=77177.60, stdev=436168.68
00:15:16.017 clat (msec): min=2107, max=8642, avg=7265.75, stdev=920.33
00:15:16.017 lat (msec): min=4178, max=10712, avg=7342.93, stdev=861.48
00:15:16.017 clat percentiles (msec):
00:15:16.017 | 1.00th=[ 4178], 5.00th=[ 5805], 10.00th=[ 6477], 20.00th=[ 7080],
00:15:16.017 | 30.00th=[ 7080], 40.00th=[ 7148], 50.00th=[ 7282], 60.00th=[ 7483],
00:15:16.017 | 70.00th=[ 7617], 80.00th=[ 7819], 90.00th=[ 8087], 95.00th=[ 8490],
00:15:16.017 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658],
00:15:16.017 | 99.99th=[ 8658]
00:15:16.018 bw ( KiB/s): min= 2052, max=10240, per=0.21%, avg=6145.00, stdev=3342.74, samples=4
00:15:16.018 iops : min= 2, max= 10, avg= 6.00, stdev= 3.27, samples=4
00:15:16.018 lat (msec) : >=2000=100.00%
00:15:16.018 cpu : usr=0.01%, sys=0.65%, ctx=345, majf=0, minf=32769
00:15:16.018 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=5.8%, 16=11.5%, 32=23.0%, >=64=54.7%
00:15:16.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.018 complete : 0=0.0%, 4=92.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=7.7%
00:15:16.018 issued rwts: total=139,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.018 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.018 job2: (groupid=0, jobs=1): err= 0: pid=3891759: Tue Nov 26 21:12:01 2024
00:15:16.018 read: IOPS=31, BW=31.3MiB/s (32.8MB/s)(401MiB/12826msec)
00:15:16.018 slat (usec): min=356, max=2199.2k, avg=26758.33, stdev=183623.71
00:15:16.018 clat (msec): min=1121, max=9866, avg=3816.02, stdev=3696.53
00:15:16.018 lat (msec): min=1128, max=9869, avg=3842.77, stdev=3703.01
00:15:16.018 clat percentiles (msec):
00:15:16.018 | 1.00th=[ 1133], 5.00th=[ 1167], 10.00th=[ 1183], 20.00th=[ 1234],
00:15:16.018 | 30.00th=[ 1284], 40.00th=[ 1318], 50.00th=[ 1351], 60.00th=[ 1385],
00:15:16.018 | 70.00th=[ 8557], 80.00th=[ 9194], 90.00th=[ 9463], 95.00th=[ 9731],
00:15:16.018 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866],
00:15:16.018 | 99.99th=[ 9866]
00:15:16.018 bw ( KiB/s): min= 2048, max=116736, per=1.92%, avg=56114.70, stdev=50713.08, samples=10
00:15:16.018 iops : min= 2, max= 114, avg=54.70, stdev=49.64, samples=10
00:15:16.018 lat (msec) : 2000=67.33%, >=2000=32.67%
00:15:16.018 cpu : usr=0.00%, sys=0.67%, ctx=779, majf=0, minf=32769
00:15:16.018 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3%
00:15:16.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.018 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4%
00:15:16.018 issued rwts: total=401,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.018 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.018 job2: (groupid=0, jobs=1): err= 0: pid=3891760: Tue Nov 26 21:12:01 2024
00:15:16.018 read: IOPS=4, BW=4989KiB/s (5109kB/s)(63.0MiB/12930msec)
00:15:16.018 slat (usec): min=764, max=2154.4k, avg=171938.12, stdev=567164.59
00:15:16.018 clat (msec): min=2097, max=12928, avg=11784.21, stdev=2354.38
00:15:16.018 lat (msec): min=4213, max=12929, avg=11956.15, stdev=2005.19
00:15:16.018 clat percentiles (msec):
00:15:16.018 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 8557], 20.00th=[10671],
00:15:16.018 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818],
00:15:16.018 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:15:16.018 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:15:16.018 | 99.99th=[12953]
00:15:16.018 lat (msec) : >=2000=100.00%
00:15:16.018 cpu : usr=0.00%, sys=0.44%, ctx=82, majf=0, minf=16129
00:15:16.018 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0%
00:15:16.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.018 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.018 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.018 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.018 job2: (groupid=0, jobs=1): err= 0: pid=3891761: Tue Nov 26 21:12:01 2024
00:15:16.018 read: IOPS=36, BW=36.8MiB/s (38.6MB/s)(472MiB/12822msec)
00:15:16.018 slat (usec): min=44, max=2162.0k, avg=22686.29, stdev=168800.95
00:15:16.018 clat (msec): min=432, max=6418, avg=2133.75, stdev=1730.73
00:15:16.018 lat (msec): min=436, max=7118, avg=2156.43, stdev=1744.59
00:15:16.018 clat percentiles (msec):
00:15:16.018 | 1.00th=[ 447], 5.00th=[ 468], 10.00th=[ 506], 20.00th=[ 535],
00:15:16.018 | 30.00th=[ 1053], 40.00th=[ 1418], 50.00th=[ 1519], 60.00th=[ 1670],
00:15:16.018 | 70.00th=[ 1720], 80.00th=[ 4111], 90.00th=[ 5269], 95.00th=[ 5269],
00:15:16.018 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 6409], 99.95th=[ 6409],
00:15:16.018 | 99.99th=[ 6409]
00:15:16.018 bw ( KiB/s): min= 2048, max=245760, per=3.02%, avg=88297.50, stdev=85773.17, samples=8
00:15:16.018 iops : min= 2, max= 240, avg=86.12, stdev=83.76, samples=8
00:15:16.018 lat (msec) : 500=8.05%, 750=20.13%, 1000=0.64%, 2000=43.43%, >=2000=27.75%
00:15:16.018 cpu : usr=0.00%, sys=0.71%, ctx=800, majf=0, minf=32769
00:15:16.018 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.7%
00:15:16.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.018 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:15:16.018 issued rwts: total=472,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.018 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.018 job2: (groupid=0, jobs=1): err= 0: pid=3891762: Tue Nov 26 21:12:01 2024
00:15:16.018 read: IOPS=3, BW=3862KiB/s (3954kB/s)(41.0MiB/10872msec)
00:15:16.018 slat (usec): min=776, max=2080.5k, avg=263005.02, stdev=680726.50
00:15:16.018 clat (msec): min=88, max=10868, avg=8504.94, stdev=3284.85
00:15:16.018 lat (msec): min=2155, max=10871, avg=8767.95, stdev=3014.67
00:15:16.018 clat percentiles (msec):
00:15:16.018 | 1.00th=[ 88], 5.00th=[ 2198], 10.00th=[ 4279], 20.00th=[ 4396],
00:15:16.018 | 30.00th=[ 8658], 40.00th=[ 8658], 50.00th=[10805], 60.00th=[10805],
00:15:16.018 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.018 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.018 | 99.99th=[10805]
00:15:16.018 lat (msec) : 100=2.44%, >=2000=97.56%
00:15:16.018 cpu : usr=0.00%, sys=0.34%, ctx=76, majf=0, minf=10497
00:15:16.018 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0%
00:15:16.018 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.018 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.018 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.018 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.018 job2: (groupid=0, jobs=1): err= 0: pid=3891763: Tue Nov 26 21:12:01 2024
00:15:16.018 read: IOPS=18, BW=18.2MiB/s (19.1MB/s)(234MiB/12835msec)
00:15:16.018 slat (usec): min=73, max=2149.8k, avg=45916.21, stdev=279109.41
00:15:16.018 clat (msec): min=602, max=8084, avg=4954.31, stdev=3241.79
00:15:16.018 lat (msec): min=604, max=8085, avg=5000.23, stdev=3227.88
00:15:16.018 clat percentiles (msec):
00:15:16.018 | 1.00th=[ 609], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 651],
00:15:16.018 | 30.00th=[ 718], 40.00th=[ 4799], 50.00th=[ 7550], 60.00th=[ 7617],
00:15:16.018 | 70.00th=[ 7752], 80.00th=[ 7886], 90.00th=[ 7953], 95.00th=[ 8020],
00:15:16.018 | 99.00th=[ 8087], 99.50th=[ 8087], 99.90th=[ 8087], 99.95th=[ 8087],
00:15:16.018 | 99.99th=[ 8087]
00:15:16.018 bw ( KiB/s): min= 2048, max=141312, per=1.50%, avg=43828.00, stdev=61721.98, samples=5
00:15:16.018 iops : min= 2, max= 138, avg=42.80, stdev=60.28, samples=5
00:15:16.018 lat (msec) : 750=32.05%, 2000=0.43%, >=2000=67.52%
00:15:16.018 cpu : usr=0.00%, sys=0.83%, ctx=212, majf=0, minf=32769
00:15:16.019 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.7%, >=64=73.1%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9%
00:15:16.019 issued rwts: total=234,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.019 job2: (groupid=0, jobs=1): err= 0: pid=3891764: Tue Nov 26 21:12:01 2024
00:15:16.019 read: IOPS=8, BW=8220KiB/s (8417kB/s)(103MiB/12831msec)
00:15:16.019 slat (usec): min=550, max=2080.4k, avg=104020.01, stdev=423279.01
00:15:16.019 clat (msec): min=2115, max=12821, avg=6252.32, stdev=2321.61
00:15:16.019 lat (msec): min=4015, max=12830, avg=6356.34, stdev=2373.89
00:15:16.019 clat percentiles (msec):
00:15:16.019 | 1.00th=[ 4010], 5.00th=[ 4044], 10.00th=[ 4077], 20.00th=[ 4144],
00:15:16.019 | 30.00th=[ 6007], 40.00th=[ 6074], 50.00th=[ 6074], 60.00th=[ 6141],
00:15:16.019 | 70.00th=[ 6208], 80.00th=[ 6275], 90.00th=[10671], 95.00th=[12818],
00:15:16.019 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:15:16.019 | 99.99th=[12818]
00:15:16.019 lat (msec) : >=2000=100.00%
00:15:16.019 cpu : usr=0.00%, sys=0.41%, ctx=234, majf=0, minf=26369
00:15:16.019 IO depths : 1=1.0%, 2=1.9%, 4=3.9%, 8=7.8%, 16=15.5%, 32=31.1%, >=64=38.8%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:15:16.019 issued rwts: total=103,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.019 job2: (groupid=0, jobs=1): err= 0: pid=3891765: Tue Nov 26 21:12:01 2024
00:15:16.019 read: IOPS=2, BW=2616KiB/s (2679kB/s)(33.0MiB/12915msec)
00:15:16.019 slat (usec): min=769, max=2136.7k, avg=327113.06, stdev=751234.35
00:15:16.019 clat (msec): min=2119, max=12913, avg=10443.84, stdev=3573.08
00:15:16.019 lat (msec): min=4180, max=12914, avg=10770.96, stdev=3268.27
00:15:16.019 clat percentiles (msec):
00:15:16.019 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342],
00:15:16.019 | 30.00th=[ 8490], 40.00th=[12684], 50.00th=[12818], 60.00th=[12953],
00:15:16.019 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:15:16.019 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:15:16.019 | 99.99th=[12953]
00:15:16.019 lat (msec) : >=2000=100.00%
00:15:16.019 cpu : usr=0.00%, sys=0.27%, ctx=73, majf=0, minf=8449
00:15:16.019 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.019 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.019 job3: (groupid=0, jobs=1): err= 0: pid=3891770: Tue Nov 26 21:12:01 2024
00:15:16.019 read: IOPS=3, BW=3478KiB/s (3562kB/s)(37.0MiB/10893msec)
00:15:16.019 slat (usec): min=1003, max=2102.2k, avg=291921.31, stdev=715176.11
00:15:16.019 clat (msec): min=90, max=10889, avg=8906.17, stdev=3095.53
00:15:16.019 lat (msec): min=2149, max=10892, avg=9198.09, stdev=2728.68
00:15:16.019 clat percentiles (msec):
00:15:16.019 | 1.00th=[ 91], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 6477],
00:15:16.019 | 30.00th=[ 8658], 40.00th=[10671], 50.00th=[10805], 60.00th=[10805],
00:15:16.019 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:15:16.019 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.019 | 99.99th=[10939]
00:15:16.019 lat (msec) : 100=2.70%, >=2000=97.30%
00:15:16.019 cpu : usr=0.00%, sys=0.34%, ctx=89, majf=0, minf=9473
00:15:16.019 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.019 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.019 job3: (groupid=0, jobs=1): err= 0: pid=3891771: Tue Nov 26 21:12:01 2024
00:15:16.019 read: IOPS=72, BW=72.1MiB/s (75.6MB/s)(729MiB/10106msec)
00:15:16.019 slat (usec): min=37, max=2041.8k, avg=13742.55, stdev=130250.43
00:15:16.019 clat (msec): min=84, max=6996, avg=1199.55, stdev=1698.17
00:15:16.019 lat (msec): min=172, max=7003, avg=1213.29, stdev=1711.19
00:15:16.019 clat percentiles (msec):
00:15:16.019 | 1.00th=[ 180], 5.00th=[ 393], 10.00th=[ 430], 20.00th=[ 468],
00:15:16.019 | 30.00th=[ 510], 40.00th=[ 575], 50.00th=[ 617], 60.00th=[ 718],
00:15:16.019 | 70.00th=[ 818], 80.00th=[ 911], 90.00th=[ 2869], 95.00th=[ 6879],
00:15:16.019 | 99.00th=[ 6946], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013],
00:15:16.019 | 99.99th=[ 7013]
00:15:16.019 bw ( KiB/s): min=47104, max=292864, per=6.01%, avg=175835.43, stdev=77314.18, samples=7
00:15:16.019 iops : min= 46, max= 286, avg=171.71, stdev=75.50, samples=7
00:15:16.019 lat (msec) : 100=0.14%, 250=2.19%, 500=25.38%, 750=37.86%, 1000=20.85%
00:15:16.019 lat (msec) : 2000=1.51%, >=2000=12.07%
00:15:16.019 cpu : usr=0.06%, sys=1.44%, ctx=650, majf=0, minf=32769
00:15:16.019 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.019 issued rwts: total=729,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.019 job3: (groupid=0, jobs=1): err= 0: pid=3891772: Tue Nov 26 21:12:01 2024
00:15:16.019 read: IOPS=3, BW=3727KiB/s (3817kB/s)(47.0MiB/12913msec)
00:15:16.019 slat (usec): min=861, max=2119.4k, avg=230156.32, stdev=644816.36
00:15:16.019 clat (msec): min=2095, max=12910, avg=10783.21, stdev=3096.50
00:15:16.019 lat (msec): min=4167, max=12912, avg=11013.37, stdev=2826.98
00:15:16.019 clat percentiles (msec):
00:15:16.019 | 1.00th=[ 2089], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 8490],
00:15:16.019 | 30.00th=[10671], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818],
00:15:16.019 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953],
00:15:16.019 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953],
00:15:16.019 | 99.99th=[12953]
00:15:16.019 lat (msec) : >=2000=100.00%
00:15:16.019 cpu : usr=0.00%, sys=0.33%, ctx=90, majf=0, minf=12033
00:15:16.019 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0%
00:15:16.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.019 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.019 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.019 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891773: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=1, BW=1609KiB/s (1647kB/s)(17.0MiB/10822msec)
00:15:16.020 slat (msec): min=6, max=4190, avg=631.37, stdev=1226.26
00:15:16.020 clat (msec): min=88, max=10798, avg=7963.88, stdev=3674.24
00:15:16.020 lat (msec): min=2173, max=10821, avg=8595.25, stdev=3116.15
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 89], 5.00th=[ 89], 10.00th=[ 2165], 20.00th=[ 4329],
00:15:16.020 | 30.00th=[ 6477], 40.00th=[ 6477], 50.00th=[10671], 60.00th=[10671],
00:15:16.020 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.020 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.020 | 99.99th=[10805]
00:15:16.020 lat (msec) : 100=5.88%, >=2000=94.12%
00:15:16.020 cpu : usr=0.00%, sys=0.11%, ctx=60, majf=0, minf=4353
00:15:16.020 IO depths : 1=5.9%, 2=11.8%, 4=23.5%, 8=47.1%, 16=11.8%, 32=0.0%, >=64=0.0%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:15:16.020 issued rwts: total=17,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891774: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=56, BW=56.3MiB/s (59.0MB/s)(607MiB/10785msec)
00:15:16.020 slat (usec): min=49, max=2097.6k, avg=17617.28, stdev=164262.83
00:15:16.020 clat (msec): min=88, max=8600, avg=1784.04, stdev=2543.63
00:15:16.020 lat (msec): min=326, max=10556, avg=1801.66, stdev=2560.05
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 326], 5.00th=[ 351], 10.00th=[ 372], 20.00th=[ 384],
00:15:16.020 | 30.00th=[ 393], 40.00th=[ 414], 50.00th=[ 426], 60.00th=[ 430],
00:15:16.020 | 70.00th=[ 464], 80.00th=[ 4329], 90.00th=[ 6812], 95.00th=[ 6879],
00:15:16.020 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 8658], 99.95th=[ 8658],
00:15:16.020 | 99.99th=[ 8658]
00:15:16.020 bw ( KiB/s): min= 2048, max=366592, per=4.80%, avg=140432.43, stdev=157661.74, samples=7
00:15:16.020 iops : min= 2, max= 358, avg=137.00, stdev=154.11, samples=7
00:15:16.020 lat (msec) : 100=0.16%, 500=73.31%, 750=2.14%, >=2000=24.38%
00:15:16.020 cpu : usr=0.02%, sys=1.11%, ctx=610, majf=0, minf=32769
00:15:16.020 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.020 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891775: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=5, BW=6045KiB/s (6190kB/s)(64.0MiB/10842msec)
00:15:16.020 slat (usec): min=368, max=2054.0k, avg=167672.05, stdev=547606.16
00:15:16.020 clat (msec): min=110, max=10804, avg=7216.99, stdev=3302.03
00:15:16.020 lat (msec): min=2148, max=10841, avg=7384.66, stdev=3206.53
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 111], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4279],
00:15:16.020 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[ 8658],
00:15:16.020 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.020 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.020 | 99.99th=[10805]
00:15:16.020 lat (msec) : 250=1.56%, >=2000=98.44%
00:15:16.020 cpu : usr=0.00%, sys=0.42%, ctx=56, majf=0, minf=16385
00:15:16.020 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.020 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891776: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=5, BW=5179KiB/s (5303kB/s)(55.0MiB/10875msec)
00:15:16.020 slat (usec): min=980, max=2089.6k, avg=195667.97, stdev=595965.79
00:15:16.020 clat (msec): min=112, max=10873, avg=8300.19, stdev=3046.50
00:15:16.020 lat (msec): min=2154, max=10874, avg=8495.86, stdev=2850.18
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 113], 5.00th=[ 2165], 10.00th=[ 4279], 20.00th=[ 4329],
00:15:16.020 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[ 8658], 60.00th=[10805],
00:15:16.020 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.020 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.020 | 99.99th=[10939]
00:15:16.020 lat (msec) : 250=1.82%, >=2000=98.18%
00:15:16.020 cpu : usr=0.00%, sys=0.46%, ctx=82, majf=0, minf=14081
00:15:16.020 IO depths : 1=1.8%, 2=3.6%, 4=7.3%, 8=14.5%, 16=29.1%, 32=43.6%, >=64=0.0%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.020 issued rwts: total=55,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891777: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=2, BW=2452KiB/s (2511kB/s)(26.0MiB/10857msec)
00:15:16.020 slat (usec): min=1678, max=2099.4k, avg=414065.16, stdev=827004.23
00:15:16.020 clat (msec): min=91, max=10853, avg=6483.41, stdev=3544.80
00:15:16.020 lat (msec): min=2148, max=10856, avg=6897.47, stdev=3393.81
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 91], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2198],
00:15:16.020 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658],
00:15:16.020 | 70.00th=[ 8658], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:15:16.020 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.020 | 99.99th=[10805]
00:15:16.020 lat (msec) : 100=3.85%, >=2000=96.15%
00:15:16.020 cpu : usr=0.00%, sys=0.18%, ctx=74, majf=0, minf=6657
00:15:16.020 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0%
00:15:16.020 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891778: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=10, BW=10.0MiB/s (10.5MB/s)(109MiB/10866msec)
00:15:16.020 slat (usec): min=340, max=2165.5k, avg=98855.32, stdev=423085.55
00:15:16.020 clat (msec): min=89, max=10863, avg=4500.25, stdev=4080.22
00:15:16.020 lat (msec): min=1577, max=10865, avg=4599.10, stdev=4102.85
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 1586], 5.00th=[ 1603], 10.00th=[ 1603], 20.00th=[ 1720],
00:15:16.020 | 30.00th=[ 1821], 40.00th=[ 1854], 50.00th=[ 1955], 60.00th=[ 2072],
00:15:16.020 | 70.00th=[ 6409], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.020 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.020 | 99.99th=[10805]
00:15:16.020 lat (msec) : 100=0.92%, 2000=55.05%, >=2000=44.04%
00:15:16.020 cpu : usr=0.00%, sys=0.71%, ctx=116, majf=0, minf=27905
00:15:16.020 IO depths : 1=0.9%, 2=1.8%, 4=3.7%, 8=7.3%, 16=14.7%, 32=29.4%, >=64=42.2%
00:15:16.020 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.020 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:15:16.020 issued rwts: total=109,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.020 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.020 job3: (groupid=0, jobs=1): err= 0: pid=3891779: Tue Nov 26 21:12:01 2024
00:15:16.020 read: IOPS=9, BW=9960KiB/s (10.2MB/s)(125MiB/12852msec)
00:15:16.020 slat (usec): min=351, max=4186.8k, avg=86015.37, stdev=466094.87
00:15:16.020 clat (msec): min=2098, max=12838, avg=6400.30, stdev=1550.56
00:15:16.020 lat (msec): min=4167, max=12851, avg=6486.32, stdev=1607.22
00:15:16.020 clat percentiles (msec):
00:15:16.020 | 1.00th=[ 4178], 5.00th=[ 5604], 10.00th=[ 5671], 20.00th=[ 5805],
00:15:16.020 | 30.00th=[ 5805], 40.00th=[ 5940], 50.00th=[ 6007], 60.00th=[ 6141],
00:15:16.020 | 70.00th=[ 6208], 80.00th=[ 6275], 90.00th=[ 8490], 95.00th=[ 8490],
00:15:16.021 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818],
00:15:16.021 | 99.99th=[12818]
00:15:16.021 lat (msec) : >=2000=100.00%
00:15:16.021 cpu : usr=0.00%, sys=0.49%, ctx=130, majf=0, minf=32001
00:15:16.021 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:15:16.021 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job3: (groupid=0, jobs=1): err= 0: pid=3891781: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=13, BW=13.5MiB/s (14.2MB/s)(147MiB/10870msec)
00:15:16.021 slat (usec): min=525, max=2092.2k, avg=73183.50, stdev=340273.59
00:15:16.021 clat (msec): min=110, max=10818, avg=8604.07, stdev=2432.77
00:15:16.021 lat (msec): min=2144, max=10858, avg=8677.25, stdev=2333.11
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 2140], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6477],
00:15:16.021 | 30.00th=[ 8658], 40.00th=[ 9597], 50.00th=[ 9597], 60.00th=[ 9866],
00:15:16.021 | 70.00th=[10000], 80.00th=[10268], 90.00th=[10402], 95.00th=[10537],
00:15:16.021 | 99.00th=[10537], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.021 | 99.99th=[10805]
00:15:16.021 bw ( KiB/s): min=10240, max=16384, per=0.44%, avg=12970.67, stdev=3128.37, samples=3
00:15:16.021 iops : min= 10, max= 16, avg=12.67, stdev= 3.06, samples=3
00:15:16.021 lat (msec) : 250=0.68%, >=2000=99.32%
00:15:16.021 cpu : usr=0.00%, sys=0.83%, ctx=318, majf=0, minf=32769
00:15:16.021 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=5.4%, 16=10.9%, 32=21.8%, >=64=57.1%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=95.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=4.8%
00:15:16.021 issued rwts: total=147,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job3: (groupid=0, jobs=1): err= 0: pid=3891782: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=3, BW=4071KiB/s (4168kB/s)(43.0MiB/10817msec)
00:15:16.021 slat (usec): min=766, max=2115.8k, avg=249489.87, stdev=672401.58
00:15:16.021 clat (msec): min=88, max=10813, avg=7001.96, stdev=3615.85
00:15:16.021 lat (msec): min=2141, max=10816, avg=7251.45, stdev=3495.59
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 89], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2198],
00:15:16.021 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[10671],
00:15:16.021 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.021 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.021 | 99.99th=[10805]
00:15:16.021 lat (msec) : 100=2.33%, >=2000=97.67%
00:15:16.021 cpu : usr=0.01%, sys=0.31%, ctx=46, majf=0, minf=11009
00:15:16.021 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.021 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job3: (groupid=0, jobs=1): err= 0: pid=3891783: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=25, BW=25.8MiB/s (27.0MB/s)(333MiB/12932msec)
00:15:16.021 slat (usec): min=70, max=2111.0k, avg=32529.32, stdev=227512.27
00:15:16.021 clat (msec): min=633, max=11232, avg=4733.34, stdev=4742.03
00:15:16.021 lat (msec): min=635, max=11244, avg=4765.87, stdev=4749.18
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 642], 5.00th=[ 651], 10.00th=[ 693], 20.00th=[ 852],
00:15:16.021 | 30.00th=[ 944], 40.00th=[ 1062], 50.00th=[ 1150], 60.00th=[ 2769],
00:15:16.021 | 70.00th=[10671], 80.00th=[10939], 90.00th=[11073], 95.00th=[11208],
00:15:16.021 | 99.00th=[11208], 99.50th=[11208], 99.90th=[11208], 99.95th=[11208],
00:15:16.021 | 99.99th=[11208]
00:15:16.021 bw ( KiB/s): min= 2048, max=208896, per=2.06%, avg=60269.71, stdev=79572.62, samples=7
00:15:16.021 iops : min= 2, max= 204, avg=58.86, stdev=77.71, samples=7
00:15:16.021 lat (msec) : 750=13.81%, 1000=21.32%, 2000=24.02%, >=2000=40.84%
00:15:16.021 cpu : usr=0.01%, sys=0.82%, ctx=458, majf=0, minf=32769
00:15:16.021 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.8%, 32=9.6%, >=64=81.1%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:15:16.021 issued rwts: total=333,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job4: (groupid=0, jobs=1): err= 0: pid=3891794: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=12, BW=12.1MiB/s (12.7MB/s)(131MiB/10839msec)
00:15:16.021 slat (usec): min=846, max=2116.2k, avg=76342.60, stdev=360658.40
00:15:16.021 clat (msec): min=837, max=10831, avg=3934.03, stdev=3949.27
00:15:16.021 lat (msec): min=839, max=10832, avg=4010.37, stdev=3985.44
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 844], 5.00th=[ 944], 10.00th=[ 1045], 20.00th=[ 1167],
00:15:16.021 | 30.00th=[ 1385], 40.00th=[ 1603], 50.00th=[ 1804], 60.00th=[ 1955],
00:15:16.021 | 70.00th=[ 2165], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805],
00:15:16.021 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.021 | 99.99th=[10805]
00:15:16.021 bw ( KiB/s): min= 8192, max= 8192, per=0.28%, avg=8192.00, stdev= 0.00, samples=1
00:15:16.021 iops : min= 8, max= 8, avg= 8.00, stdev= 0.00, samples=1
00:15:16.021 lat (msec) : 1000=9.16%, 2000=51.91%, >=2000=38.93%
00:15:16.021 cpu : usr=0.00%, sys=0.69%, ctx=239, majf=0, minf=32769
00:15:16.021 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.1%, 16=12.2%, 32=24.4%, >=64=51.9%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=80.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=20.0%
00:15:16.021 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job4: (groupid=0, jobs=1): err= 0: pid=3891796: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=184, BW=185MiB/s (194MB/s)(2003MiB/10830msec)
00:15:16.021 slat (usec): min=38, max=2064.2k, avg=5348.59, stdev=68155.08
00:15:16.021 clat (msec): min=104, max=5158, avg=654.43, stdev=1139.63
00:15:16.021 lat (msec): min=104, max=5185, avg=659.78, stdev=1145.40
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 123], 20.00th=[ 124],
00:15:16.021 | 30.00th=[ 124], 40.00th=[ 125], 50.00th=[ 338], 60.00th=[ 355],
00:15:16.021 | 70.00th=[ 435], 80.00th=[ 667], 90.00th=[ 1452], 95.00th=[ 4665],
00:15:16.021 | 99.00th=[ 5067], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134],
00:15:16.021 | 99.99th=[ 5134]
00:15:16.021 bw ( KiB/s): min= 1383, max=1056768, per=9.37%, avg=274375.71, stdev=303971.27, samples=14
00:15:16.021 iops : min= 1, max= 1032, avg=267.86, stdev=296.91, samples=14
00:15:16.021 lat (msec) : 250=46.08%, 500=25.96%, 750=10.48%, 1000=4.49%, 2000=6.09%
00:15:16.021 lat (msec) : >=2000=6.89%
00:15:16.021 cpu : usr=0.03%, sys=1.54%, ctx=2580, majf=0, minf=32769
00:15:16.021 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:15:16.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.021 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.021 issued rwts: total=2003,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.021 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.021 job4: (groupid=0, jobs=1): err= 0: pid=3891797: Tue Nov 26 21:12:01 2024
00:15:16.021 read: IOPS=38, BW=38.1MiB/s (40.0MB/s)(414MiB/10852msec)
00:15:16.021 slat (usec): min=67, max=2108.0k, avg=25938.08, stdev=202715.89
00:15:16.021 clat (msec): min=111, max=9259, avg=3180.46, stdev=3590.03
00:15:16.021 lat (msec): min=441, max=9259, avg=3206.40, stdev=3596.86
00:15:16.021 clat percentiles (msec):
00:15:16.021 | 1.00th=[ 456], 5.00th=[ 498], 10.00th=[ 518], 20.00th=[ 575],
00:15:16.021 | 30.00th=[ 651], 40.00th=[ 726], 50.00th=[ 776], 60.00th=[ 844],
00:15:16.022 | 70.00th=[ 5000], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9060],
00:15:16.022 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194],
00:15:16.022 | 99.99th=[ 9194]
00:15:16.022 bw ( KiB/s): min= 4096, max=182272, per=2.50%, avg=73216.00, stdev=71237.69, samples=8
00:15:16.022 iops : min= 4, max= 178, avg=71.50, stdev=69.57, samples=8
00:15:16.022 lat (msec) : 250=0.24%, 500=5.07%, 750=37.20%, 1000=20.77%, >=2000=36.71%
00:15:16.022 cpu : usr=0.00%, sys=0.73%, ctx=546, majf=0, minf=32769
00:15:16.022 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3%
00:15:16.022 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891798: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=3, BW=3953KiB/s (4048kB/s)(42.0MiB/10879msec)
00:15:16.022 slat (usec): min=738, max=2118.3k, avg=256252.11, stdev=677478.22
00:15:16.022 clat (msec): min=115, max=10877, avg=8829.12, stdev=2988.99
00:15:16.022 lat (msec): min=2233, max=10878, avg=9085.37, stdev=2667.85
00:15:16.022 clat percentiles (msec):
00:15:16.022 | 1.00th=[ 116], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 6477],
00:15:16.022 | 30.00th=[ 8557], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805],
00:15:16.022 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10939],
00:15:16.022 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.022 | 99.99th=[10939]
00:15:16.022 lat (msec) : 250=2.38%, >=2000=97.62%
00:15:16.022 cpu : usr=0.00%, sys=0.38%, ctx=80, majf=0, minf=10753
00:15:16.022 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.022 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891799: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=193, BW=193MiB/s (202MB/s)(1932MiB/10009msec)
00:15:16.022 slat (usec): min=38, max=96215, avg=5172.02, stdev=7345.40
00:15:16.022 clat (msec): min=8, max=1496, avg=617.22, stdev=292.83
00:15:16.022 lat (msec): min=9, max=1498, avg=622.39, stdev=294.26
00:15:16.022 clat percentiles (msec):
00:15:16.022 | 1.00th=[ 21], 5.00th=[ 257], 10.00th=[ 288], 20.00th=[ 363],
00:15:16.022 | 30.00th=[ 435], 40.00th=[ 485], 50.00th=[ 600], 60.00th=[ 667],
00:15:16.022 | 70.00th=[ 735], 80.00th=[ 844], 90.00th=[ 1011], 95.00th=[ 1133],
00:15:16.022 | 99.00th=[ 1485], 99.50th=[ 1485], 99.90th=[ 1502], 99.95th=[ 1502],
00:15:16.022 | 99.99th=[ 1502]
00:15:16.022 bw ( KiB/s): min=63488, max=423089, per=6.84%, avg=200278.29, stdev=97580.57, samples=17
00:15:16.022 iops : min= 62, max= 413, avg=195.53, stdev=95.31, samples=17
00:15:16.022 lat (msec) : 10=0.10%, 20=0.88%, 50=0.78%, 100=0.83%, 250=1.97%
00:15:16.022 lat (msec) : 500=37.06%, 750=30.12%, 1000=17.60%, 2000=10.66%
00:15:16.022 cpu : usr=0.11%, sys=2.00%, ctx=3318, majf=0, minf=32769
00:15:16.022 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.022 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891800: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=1, BW=1512KiB/s (1548kB/s)(16.0MiB/10839msec)
00:15:16.022 slat (usec): min=707, max=4219.7k, avg=669991.98, stdev=1250137.86
00:15:16.022 clat (msec): min=119, max=10838, avg=5955.53, stdev=3728.74
00:15:16.022 lat (msec): min=2144, max=10838, avg=6625.52, stdev=3569.80
00:15:16.022 clat percentiles (msec):
00:15:16.022 | 1.00th=[ 120], 5.00th=[ 120], 10.00th=[ 2140], 20.00th=[ 2198],
00:15:16.022 | 30.00th=[ 4279], 40.00th=[ 4329], 50.00th=[ 4329], 60.00th=[ 6477],
00:15:16.022 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.022 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.022 | 99.99th=[10805]
00:15:16.022 lat (msec) : 250=6.25%, >=2000=93.75%
00:15:16.022 cpu : usr=0.00%, sys=0.11%, ctx=38, majf=0, minf=4097
00:15:16.022 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891801: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=7, BW=7320KiB/s (7495kB/s)(78.0MiB/10912msec)
00:15:16.022 slat (usec): min=764, max=2102.0k, avg=138370.00, stdev=508959.40
00:15:16.022 clat (msec): min=118, max=10910, avg=7864.84, stdev=3543.57
00:15:16.022 lat (msec): min=2141, max=10911, avg=8003.21, stdev=3446.57
00:15:16.022 clat percentiles (msec):
00:15:16.022 | 1.00th=[ 120], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329],
00:15:16.022 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[10671], 60.00th=[10805],
00:15:16.022 | 70.00th=[10805], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:15:16.022 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.022 | 99.99th=[10939]
00:15:16.022 lat (msec) : 250=1.28%, >=2000=98.72%
00:15:16.022 cpu : usr=0.00%, sys=0.60%, ctx=80, majf=0, minf=19969
00:15:16.022 IO depths : 1=1.3%, 2=2.6%, 4=5.1%, 8=10.3%, 16=20.5%, 32=41.0%, >=64=19.2%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0%
00:15:16.022 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891802: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=4, BW=4226KiB/s (4328kB/s)(45.0MiB/10903msec)
00:15:16.022 slat (usec): min=746, max=2224.4k, avg=239653.55, stdev=672254.81
00:15:16.022 clat (msec): min=118, max=10900, avg=9419.99, stdev=3035.48
00:15:16.022 lat (msec): min=2157, max=10902, avg=9659.65, stdev=2690.54
00:15:16.022 clat percentiles (msec):
00:15:16.022 | 1.00th=[ 118], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 8658],
00:15:16.022 | 30.00th=[10805], 40.00th=[10805], 50.00th=[10805], 60.00th=[10939],
00:15:16.022 | 70.00th=[10939], 80.00th=[10939], 90.00th=[10939], 95.00th=[10939],
00:15:16.022 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939],
00:15:16.022 | 99.99th=[10939]
00:15:16.022 lat (msec) : 250=2.22%, >=2000=97.78%
00:15:16.022 cpu : usr=0.00%, sys=0.39%, ctx=65, majf=0, minf=11521
00:15:16.022 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0%
00:15:16.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.022 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.022 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.022 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.022 job4: (groupid=0, jobs=1): err= 0: pid=3891803: Tue Nov 26 21:12:01 2024
00:15:16.022 read: IOPS=68, BW=68.7MiB/s (72.0MB/s)(741MiB/10791msec)
00:15:16.022 slat (usec): min=426, max=2050.8k, avg=14402.10, stdev=111064.12
00:15:16.022 clat (msec): min=115, max=4397, avg=1760.34, stdev=1126.16
00:15:16.023 lat (msec): min=608, max=6386, avg=1774.74, stdev=1133.59
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 625], 5.00th=[ 676], 10.00th=[ 709], 20.00th=[ 768],
00:15:16.023 | 30.00th=[ 852], 40.00th=[ 1011], 50.00th=[ 1284], 60.00th=[ 1385],
00:15:16.023 | 70.00th=[ 2970], 80.00th=[ 3104], 90.00th=[ 3440], 95.00th=[ 3675],
00:15:16.023 | 99.00th=[ 3876], 99.50th=[ 3910], 99.90th=[ 4396], 99.95th=[ 4396],
00:15:16.023 | 99.99th=[ 4396]
00:15:16.023 bw ( KiB/s): min= 2048, max=206848, per=3.07%, avg=89819.43, stdev=67188.31, samples=14
00:15:16.023 iops : min= 2, max= 202, avg=87.71, stdev=65.61, samples=14
00:15:16.023 lat (msec) : 250=0.13%, 750=17.54%, 1000=21.59%, 2000=26.45%, >=2000=34.28%
00:15:16.023 cpu : usr=0.00%, sys=1.08%, ctx=1684, majf=0, minf=32769
00:15:16.023 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5%
00:15:16.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.023 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.023 issued rwts: total=741,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.023 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.023 job4: (groupid=0, jobs=1): err= 0: pid=3891804: Tue Nov 26 21:12:01 2024
00:15:16.023 read: IOPS=3, BW=4065KiB/s (4162kB/s)(43.0MiB/10833msec)
00:15:16.023 slat (usec): min=778, max=2103.8k, avg=249226.40, stdev=663990.45
00:15:16.023 clat (msec): min=115, max=10817, avg=7346.62, stdev=3313.82
00:15:16.023 lat (msec): min=2153, max=10832, avg=7595.84, stdev=3156.30
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 116], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329],
00:15:16.023 | 30.00th=[ 4396], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[ 8658],
00:15:16.023 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805],
00:15:16.023 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805],
00:15:16.023 | 99.99th=[10805]
00:15:16.023 lat (msec) : 250=2.33%, >=2000=97.67%
00:15:16.023 cpu : usr=0.00%, sys=0.32%, ctx=70, majf=0, minf=11009
00:15:16.023 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0%
00:15:16.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.023 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0%
00:15:16.023 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.023 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.023 job4: (groupid=0, jobs=1): err= 0: pid=3891805: Tue Nov 26 21:12:01 2024
00:15:16.023 read: IOPS=63, BW=63.5MiB/s (66.6MB/s)(687MiB/10811msec)
00:15:16.023 slat (usec): min=42, max=2073.4k, avg=14551.80, stdev=102165.70
00:15:16.023 clat (msec): min=654, max=6444, avg=1901.91, stdev=1201.31
00:15:16.023 lat (msec): min=657, max=6467, avg=1916.46, stdev=1211.58
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 667], 5.00th=[ 709], 10.00th=[ 751], 20.00th=[ 936],
00:15:16.023 | 30.00th=[ 1250], 40.00th=[ 1334], 50.00th=[ 1385], 60.00th=[ 1469],
00:15:16.023 | 70.00th=[ 2123], 80.00th=[ 3004], 90.00th=[ 3809], 95.00th=[ 4732],
00:15:16.023 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 6477], 99.95th=[ 6477],
00:15:16.023 | 99.99th=[ 6477]
00:15:16.023 bw ( KiB/s): min= 2048, max=180224, per=3.26%, avg=95573.33, stdev=48632.08, samples=12
00:15:16.023 iops : min= 2, max= 176, avg=93.33, stdev=47.49, samples=12
00:15:16.023 lat (msec) : 750=9.17%, 1000=12.52%, 2000=45.85%, >=2000=32.46%
00:15:16.023 cpu : usr=0.01%, sys=1.42%, ctx=1338, majf=0, minf=32769
00:15:16.023 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8%
00:15:16.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.023 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.023 issued rwts: total=687,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.023 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.023 job4: (groupid=0, jobs=1): err= 0: pid=3891806: Tue Nov 26 21:12:01 2024
00:15:16.023 read: IOPS=31, BW=31.0MiB/s (32.6MB/s)(337MiB/10854msec)
00:15:16.023 slat (usec): min=58, max=2033.9k, avg=31869.66, stdev=205885.01
00:15:16.023 clat (msec): min=111, max=6262, avg=3905.27, stdev=1790.26
00:15:16.023 lat (msec): min=698, max=6272, avg=3937.14, stdev=1783.77
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 701], 5.00th=[ 760], 10.00th=[ 776], 20.00th=[ 2165],
00:15:16.023 | 30.00th=[ 3809], 40.00th=[ 3943], 50.00th=[ 3977], 60.00th=[ 4178],
00:15:16.023 | 70.00th=[ 5738], 80.00th=[ 5805], 90.00th=[ 5873], 95.00th=[ 6141],
00:15:16.023 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6275], 99.95th=[ 6275],
00:15:16.023 | 99.99th=[ 6275]
00:15:16.023 bw ( KiB/s): min= 2048, max=122880, per=1.62%, avg=47559.11, stdev=38489.02, samples=9
00:15:16.023 iops : min= 2, max= 120, avg=46.44, stdev=37.59, samples=9
00:15:16.023 lat (msec) : 250=0.30%, 750=3.26%, 1000=14.24%, 2000=1.48%, >=2000=80.71%
00:15:16.023 cpu : usr=0.00%, sys=0.88%, ctx=509, majf=0, minf=32769
00:15:16.023 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.5%, >=64=81.3%
00:15:16.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.023 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5%
00:15:16.023 issued rwts: total=337,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.023 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.023 job4: (groupid=0, jobs=1): err= 0: pid=3891807: Tue Nov 26 21:12:01 2024
00:15:16.023 read: IOPS=97, BW=97.1MiB/s (102MB/s)(974MiB/10034msec)
00:15:16.023 slat (usec): min=49, max=2053.4k, avg=10264.22, stdev=92966.03
00:15:16.023 clat (msec): min=32, max=5172, avg=803.34, stdev=676.33
00:15:16.023 lat (msec): min=35, max=5183, avg=813.61, stdev=690.42
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 78], 5.00th=[ 418], 10.00th=[ 506], 20.00th=[ 558],
00:15:16.023 | 30.00th=[ 600], 40.00th=[ 659], 50.00th=[ 726], 60.00th=[ 776],
00:15:16.023 | 70.00th=[ 844], 80.00th=[ 877], 90.00th=[ 919], 95.00th=[ 944],
00:15:16.023 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5201], 99.95th=[ 5201],
00:15:16.023 | 99.99th=[ 5201]
00:15:16.023 bw ( KiB/s): min=118784, max=245760, per=5.92%, avg=173435.70, stdev=42662.76, samples=10
00:15:16.023 iops : min= 116, max= 240, avg=169.30, stdev=41.71, samples=10
00:15:16.023 lat (msec) : 50=0.51%, 100=0.62%, 250=1.75%, 500=6.78%, 750=44.56%
00:15:16.023 lat (msec) : 1000=42.61%, 2000=0.51%, >=2000=2.67%
00:15:16.023 cpu : usr=0.05%, sys=1.36%, ctx=1510, majf=0, minf=32769
00:15:16.023 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.5%
00:15:16.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.023 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.023 issued rwts: total=974,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.023 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.023 job5: (groupid=0, jobs=1): err= 0: pid=3891814: Tue Nov 26 21:12:01 2024
00:15:16.023 read: IOPS=53, BW=53.8MiB/s (56.5MB/s)(539MiB/10012msec)
00:15:16.023 slat (usec): min=354, max=2100.9k, avg=18556.19, stdev=178430.19
00:15:16.023 clat (msec): min=8, max=9066, avg=615.21, stdev=1576.72
00:15:16.023 lat (msec): min=48, max=9068, avg=633.77, stdev=1617.90
00:15:16.023 clat percentiles (msec):
00:15:16.023 | 1.00th=[ 62], 5.00th=[ 192], 10.00th=[ 243], 20.00th=[ 243],
00:15:16.023 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 257], 60.00th=[ 271],
00:15:16.023 | 70.00th=[ 317], 80.00th=[ 363], 90.00th=[ 443], 95.00th=[ 2635],
00:15:16.023 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060],
00:15:16.023 | 99.99th=[ 9060]
00:15:16.023 bw ( KiB/s): min=460800, max=460800, per=15.74%, avg=460800.00, stdev= 0.00, samples=1
00:15:16.023 iops : min= 450, max= 450, avg=450.00, stdev= 0.00, samples=1
00:15:16.023 lat (msec) : 10=0.19%, 50=0.19%, 100=0.93%, 250=46.94%, 500=45.08%
00:15:16.023 lat (msec) : 750=1.67%, >=2000=5.01%
00:15:16.023 cpu : usr=0.01%, sys=0.91%, ctx=1059, majf=0, minf=32769
00:15:16.024 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3%
00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.024 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2%
00:15:16.024 issued rwts: total=539,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891815: Tue Nov 26 21:12:01 2024
00:15:16.024 read: IOPS=107, BW=107MiB/s (113MB/s)(1384MiB/12877msec)
00:15:16.024 slat (usec): min=36, max=2039.4k, avg=7746.49, stdev=71520.27
00:15:16.024 clat (msec): min=228, max=4895, avg=945.43, stdev=1057.62
00:15:16.024 lat (msec): min=229, max=4898, avg=953.18, stdev=1063.66
00:15:16.024 clat percentiles (msec):
00:15:16.024 | 1.00th=[ 232], 5.00th=[ 264], 10.00th=[ 317], 20.00th=[ 368],
00:15:16.024 | 30.00th=[ 409], 40.00th=[ 447], 50.00th=[ 493], 60.00th=[ 693],
00:15:16.024 | 70.00th=[ 852], 80.00th=[ 1167], 90.00th=[ 1469], 95.00th=[ 3977],
00:15:16.024 | 99.00th=[ 4279], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866],
00:15:16.024 | 99.99th=[ 4866]
00:15:16.024 bw ( KiB/s): min= 2048, max=442368, per=6.28%, avg=183840.21, stdev=130784.09, samples=14
00:15:16.024 iops : min= 2, max= 432, avg=179.50, stdev=127.69, samples=14
00:15:16.024 lat (msec) : 250=3.90%, 500=46.60%, 750=12.36%, 1000=12.93%, 2000=14.45%
00:15:16.024 lat (msec) : >=2000=9.75%
00:15:16.024 cpu : usr=0.03%, sys=1.21%, ctx=2234, majf=0, minf=32769
00:15:16.024 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4%
00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.024 issued rwts: total=1384,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891816: Tue Nov 26 21:12:01 2024
00:15:16.024 read: IOPS=75, BW=75.5MiB/s (79.2MB/s)(820MiB/10854msec)
00:15:16.024 slat (usec): min=46, max=2121.8k, avg=13087.55, stdev=125620.81
00:15:16.024 clat (msec): min=118, max=6442, avg=1293.64, stdev=1073.37
00:15:16.024 lat (msec): min=348, max=6472, avg=1306.73, stdev=1084.29
00:15:16.024 clat percentiles (msec):
00:15:16.024 | 1.00th=[ 351], 5.00th=[ 363], 10.00th=[ 376], 20.00th=[ 451],
00:15:16.024 | 30.00th=[ 510], 40.00th=[ 567], 50.00th=[ 802], 60.00th=[ 827],
00:15:16.024 | 70.00th=[ 2299], 80.00th=[ 2668], 90.00th=[ 2970], 95.00th=[ 3138],
00:15:16.024 | 99.00th=[ 3205], 99.50th=[ 4329], 99.90th=[ 6477], 99.95th=[ 6477],
00:15:16.024 | 99.99th=[ 6477]
00:15:16.024 bw ( KiB/s): min= 8192, max=325632, per=5.38%, avg=157423.89, stdev=102565.39, samples=9
00:15:16.024 iops : min= 8, max= 318, avg=153.67, stdev=100.13, samples=9
00:15:16.024 lat (msec) : 250=0.12%, 500=27.80%, 750=18.90%, 1000=21.34%, >=2000=31.83%
00:15:16.024 cpu : usr=0.02%, sys=1.08%, ctx=1550, majf=0, minf=32769
00:15:16.024 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3%
00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:16.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:16.024 issued rwts: total=820,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891817: Tue Nov 26 21:12:01 2024
00:15:16.024 read: IOPS=29, BW=29.1MiB/s (30.5MB/s)(292MiB/10032msec)
00:15:16.024 slat (usec): min=594, max=2093.3k, avg=34257.87, stdev=240314.28 00:15:16.024 clat (msec): min=27, max=9115, avg=1559.92, stdev=2430.72 00:15:16.024 lat (msec): min=31, max=9119, avg=1594.18, stdev=2469.45 00:15:16.024 clat percentiles (msec): 00:15:16.024 | 1.00th=[ 34], 5.00th=[ 73], 10.00th=[ 180], 20.00th=[ 342], 00:15:16.024 | 30.00th=[ 514], 40.00th=[ 693], 50.00th=[ 718], 60.00th=[ 743], 00:15:16.024 | 70.00th=[ 760], 80.00th=[ 768], 90.00th=[ 4933], 95.00th=[ 9060], 00:15:16.024 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:15:16.024 | 99.99th=[ 9060] 00:15:16.024 bw ( KiB/s): min=157696, max=180224, per=5.77%, avg=168960.00, stdev=15929.70, samples=2 00:15:16.024 iops : min= 154, max= 176, avg=165.00, stdev=15.56, samples=2 00:15:16.024 lat (msec) : 50=3.42%, 100=3.08%, 250=8.22%, 500=14.04%, 750=35.27% 00:15:16.024 lat (msec) : 1000=17.81%, >=2000=18.15% 00:15:16.024 cpu : usr=0.00%, sys=0.90%, ctx=513, majf=0, minf=32769 00:15:16.024 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.024 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:15:16.024 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891818: Tue Nov 26 21:12:01 2024 00:15:16.024 read: IOPS=21, BW=21.0MiB/s (22.0MB/s)(228MiB/10850msec) 00:15:16.024 slat (usec): min=397, max=2041.3k, avg=47062.69, stdev=267807.76 00:15:16.024 clat (msec): min=118, max=7978, avg=4645.18, stdev=3002.63 00:15:16.024 lat (msec): min=939, max=7982, avg=4692.25, stdev=2984.11 00:15:16.024 clat percentiles (msec): 00:15:16.024 | 1.00th=[ 936], 5.00th=[ 953], 10.00th=[ 961], 20.00th=[ 1036], 00:15:16.024 | 30.00th=[ 1099], 40.00th=[ 3138], 50.00th=[ 5805], 60.00th=[ 7148], 00:15:16.024 | 70.00th=[ 7416], 80.00th=[ 7617], 90.00th=[ 7752], 95.00th=[ 7886], 00:15:16.024 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:15:16.024 | 99.99th=[ 7953] 00:15:16.024 bw ( KiB/s): min= 6144, max=100352, per=1.17%, avg=34133.33, stdev=40686.02, samples=6 00:15:16.024 iops : min= 6, max= 98, avg=33.33, stdev=39.73, samples=6 00:15:16.024 lat (msec) : 250=0.44%, 1000=16.23%, 2000=18.86%, >=2000=64.47% 00:15:16.024 cpu : usr=0.00%, sys=0.65%, ctx=623, majf=0, minf=32769 00:15:16.024 IO depths : 1=0.4%, 2=0.9%, 4=1.8%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.4% 00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.024 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:15:16.024 issued rwts: total=228,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891819: Tue Nov 26 21:12:01 2024 00:15:16.024 read: IOPS=130, BW=130MiB/s (137MB/s)(1314MiB/10076msec) 00:15:16.024 slat (usec): min=47, max=2175.7k, avg=7611.70, stdev=100781.03 00:15:16.024 clat (msec): min=69, max=6749, avg=557.23, stdev=1095.07 00:15:16.024 lat (msec): min=157, max=6752, avg=564.84, stdev=1108.29 00:15:16.024 clat percentiles (msec): 00:15:16.024 | 1.00th=[ 171], 5.00th=[ 232], 10.00th=[ 234], 20.00th=[ 239], 00:15:16.024 | 30.00th=[ 253], 40.00th=[ 271], 50.00th=[ 305], 60.00th=[ 342], 00:15:16.024 | 70.00th=[ 351], 80.00th=[ 380], 90.00th=[ 709], 95.00th=[ 936], 00:15:16.024 | 99.00th=[ 
6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:15:16.024 | 99.99th=[ 6745] 00:15:16.024 bw ( KiB/s): min=126976, max=534528, per=11.86%, avg=347282.29, stdev=156431.35, samples=7 00:15:16.024 iops : min= 124, max= 522, avg=339.14, stdev=152.76, samples=7 00:15:16.024 lat (msec) : 100=0.08%, 250=29.38%, 500=56.54%, 750=5.86%, 1000=3.81% 00:15:16.024 lat (msec) : >=2000=4.34% 00:15:16.024 cpu : usr=0.04%, sys=1.83%, ctx=1171, majf=0, minf=32769 00:15:16.024 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.4%, >=64=95.2% 00:15:16.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.024 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.024 issued rwts: total=1314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.024 job5: (groupid=0, jobs=1): err= 0: pid=3891821: Tue Nov 26 21:12:01 2024 00:15:16.024 read: IOPS=125, BW=126MiB/s (132MB/s)(1261MiB/10033msec) 00:15:16.025 slat (usec): min=43, max=1285.8k, avg=7927.66, stdev=49322.65 00:15:16.025 clat (msec): min=31, max=2874, avg=930.06, stdev=642.03 00:15:16.025 lat (msec): min=32, max=4160, avg=937.99, stdev=647.80 00:15:16.025 clat percentiles (msec): 00:15:16.025 | 1.00th=[ 89], 5.00th=[ 372], 10.00th=[ 397], 20.00th=[ 460], 00:15:16.025 | 30.00th=[ 531], 40.00th=[ 592], 50.00th=[ 735], 60.00th=[ 785], 00:15:16.025 | 70.00th=[ 877], 80.00th=[ 1821], 90.00th=[ 2165], 95.00th=[ 2366], 00:15:16.025 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 2869], 99.95th=[ 2869], 00:15:16.025 | 99.99th=[ 2869] 00:15:16.025 bw ( KiB/s): min= 6144, max=301056, per=5.29%, avg=154812.13, stdev=83116.62, samples=15 00:15:16.025 iops : min= 6, max= 294, avg=151.13, stdev=81.19, samples=15 00:15:16.025 lat (msec) : 50=0.40%, 100=0.79%, 250=1.90%, 500=21.81%, 750=28.07% 00:15:16.025 lat (msec) : 1000=21.33%, 2000=15.23%, >=2000=10.47% 00:15:16.025 cpu : usr=0.08%, sys=1.23%, ctx=1954, majf=0, minf=32769 00:15:16.025 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:15:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.025 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.025 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.025 job5: (groupid=0, jobs=1): err= 0: pid=3891822: Tue Nov 26 21:12:01 2024 00:15:16.025 read: IOPS=50, BW=50.8MiB/s (53.2MB/s)(510MiB/10048msec) 00:15:16.025 slat (usec): min=67, max=2110.6k, avg=19679.51, stdev=146129.93 00:15:16.025 clat (msec): min=8, max=7920, avg=2317.23, stdev=2542.41 00:15:16.025 lat (msec): min=54, max=7929, avg=2336.90, stdev=2549.05 00:15:16.025 clat percentiles (msec): 00:15:16.025 | 1.00th=[ 155], 5.00th=[ 380], 10.00th=[ 430], 20.00th=[ 481], 00:15:16.025 | 30.00th=[ 550], 40.00th=[ 776], 50.00th=[ 1116], 60.00th=[ 1385], 00:15:16.025 | 70.00th=[ 1653], 80.00th=[ 6275], 90.00th=[ 6678], 95.00th=[ 7282], 00:15:16.025 | 99.00th=[ 7349], 99.50th=[ 7416], 99.90th=[ 7953], 99.95th=[ 7953], 00:15:16.025 | 99.99th=[ 7953] 00:15:16.025 bw ( KiB/s): min= 2048, max=260096, per=3.20%, avg=93696.00, stdev=92719.07, samples=8 00:15:16.025 iops : min= 2, max= 254, avg=91.50, stdev=90.55, samples=8 00:15:16.025 lat (msec) : 10=0.20%, 100=0.78%, 250=0.39%, 500=21.18%, 750=16.86% 00:15:16.025 lat (msec) : 1000=7.65%, 2000=26.67%, >=2000=26.27% 00:15:16.025 cpu : usr=0.04%, sys=0.96%, ctx=809, 
majf=0, minf=32769 00:15:16.025 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:15:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.025 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:15:16.025 issued rwts: total=510,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.025 job5: (groupid=0, jobs=1): err= 0: pid=3891823: Tue Nov 26 21:12:01 2024 00:15:16.025 read: IOPS=97, BW=97.7MiB/s (102MB/s)(1057MiB/10814msec) 00:15:16.025 slat (usec): min=31, max=2108.7k, avg=10113.92, stdev=90216.05 00:15:16.025 clat (msec): min=118, max=2984, avg=1010.93, stdev=674.06 00:15:16.025 lat (msec): min=352, max=2988, avg=1021.05, stdev=677.06 00:15:16.025 clat percentiles (msec): 00:15:16.025 | 1.00th=[ 359], 5.00th=[ 384], 10.00th=[ 414], 20.00th=[ 592], 00:15:16.025 | 30.00th=[ 659], 40.00th=[ 718], 50.00th=[ 776], 60.00th=[ 860], 00:15:16.025 | 70.00th=[ 1053], 80.00th=[ 1217], 90.00th=[ 2433], 95.00th=[ 2769], 00:15:16.025 | 99.00th=[ 2937], 99.50th=[ 2970], 99.90th=[ 2970], 99.95th=[ 2970], 00:15:16.025 | 99.99th=[ 2970] 00:15:16.025 bw ( KiB/s): min= 2048, max=307200, per=4.65%, avg=136045.71, stdev=82573.99, samples=14 00:15:16.025 iops : min= 2, max= 300, avg=132.86, stdev=80.64, samples=14 00:15:16.025 lat (msec) : 250=0.09%, 500=14.66%, 750=29.52%, 1000=23.94%, 2000=19.21% 00:15:16.025 lat (msec) : >=2000=12.58% 00:15:16.025 cpu : usr=0.05%, sys=1.24%, ctx=2117, majf=0, minf=32769 00:15:16.025 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.0%, >=64=94.0% 00:15:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.025 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.025 issued rwts: total=1057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.025 job5: (groupid=0, jobs=1): err= 0: pid=3891824: Tue Nov 26 21:12:01 2024 00:15:16.025 read: IOPS=23, BW=23.0MiB/s (24.1MB/s)(231MiB/10042msec) 00:15:16.025 slat (usec): min=351, max=2099.2k, avg=43430.98, stdev=269914.20 00:15:16.025 clat (msec): min=8, max=9078, avg=1130.71, stdev=1604.39 00:15:16.025 lat (msec): min=51, max=9114, avg=1174.14, stdev=1685.87 00:15:16.025 clat percentiles (msec): 00:15:16.025 | 1.00th=[ 54], 5.00th=[ 167], 10.00th=[ 376], 20.00th=[ 439], 00:15:16.025 | 30.00th=[ 481], 40.00th=[ 743], 50.00th=[ 818], 60.00th=[ 860], 00:15:16.025 | 70.00th=[ 902], 80.00th=[ 953], 90.00th=[ 1020], 95.00th=[ 4799], 00:15:16.025 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:15:16.025 | 99.99th=[ 9060] 00:15:16.025 bw ( KiB/s): min=145408, max=145408, per=4.97%, avg=145408.00, stdev= 0.00, samples=1 00:15:16.025 iops : min= 142, max= 142, avg=142.00, stdev= 0.00, samples=1 00:15:16.025 lat (msec) : 10=0.43%, 100=2.60%, 250=2.16%, 500=27.27%, 750=9.52% 00:15:16.025 lat (msec) : 1000=45.45%, 2000=3.03%, >=2000=9.52% 00:15:16.025 cpu : usr=0.00%, sys=0.71%, ctx=513, majf=0, minf=32769 00:15:16.025 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=6.9%, 32=13.9%, >=64=72.7% 00:15:16.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.025 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:15:16.025 issued rwts: total=231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.025 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.025 job5: (groupid=0, jobs=1): 
err= 0: pid=3891825: Tue Nov 26 21:12:01 2024 00:15:16.025 read: IOPS=108, BW=109MiB/s (114MB/s)(1089MiB/10012msec) 00:15:16.025 slat (usec): min=43, max=2040.6k, avg=9180.44, stdev=112017.10 00:15:16.025 clat (msec): min=11, max=7863, avg=1132.01, stdev=2284.67 00:15:16.025 lat (msec): min=11, max=7865, avg=1141.19, stdev=2293.67 00:15:16.025 clat percentiles (msec): 00:15:16.025 | 1.00th=[ 21], 5.00th=[ 63], 10.00th=[ 115], 20.00th=[ 197], 00:15:16.026 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 279], 00:15:16.026 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 5738], 95.00th=[ 7819], 00:15:16.026 | 99.00th=[ 7886], 99.50th=[ 7886], 99.90th=[ 7886], 99.95th=[ 7886], 00:15:16.026 | 99.99th=[ 7886] 00:15:16.026 bw ( KiB/s): min= 2048, max=491520, per=5.55%, avg=162560.00, stdev=202883.28, samples=8 00:15:16.026 iops : min= 2, max= 480, avg=158.75, stdev=198.13, samples=8 00:15:16.026 lat (msec) : 20=1.01%, 50=2.85%, 100=4.68%, 250=37.28%, 500=39.03% 00:15:16.026 lat (msec) : 750=0.18%, 2000=1.38%, >=2000=13.59% 00:15:16.026 cpu : usr=0.02%, sys=1.40%, ctx=1390, majf=0, minf=32769 00:15:16.026 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:15:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.026 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.026 issued rwts: total=1089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.026 job5: (groupid=0, jobs=1): err= 0: pid=3891826: Tue Nov 26 21:12:01 2024 00:15:16.026 read: IOPS=104, BW=105MiB/s (110MB/s)(1047MiB/10012msec) 00:15:16.026 slat (usec): min=86, max=2069.6k, avg=9549.34, stdev=125628.49 00:15:16.026 clat (msec): min=11, max=8987, avg=445.61, stdev=1467.37 00:15:16.026 lat (msec): min=13, max=9005, avg=455.16, stdev=1491.34 00:15:16.026 clat percentiles (msec): 00:15:16.026 | 1.00th=[ 22], 5.00th=[ 62], 10.00th=[ 112], 20.00th=[ 120], 00:15:16.026 | 30.00th=[ 121], 40.00th=[ 121], 50.00th=[ 121], 60.00th=[ 122], 00:15:16.026 | 70.00th=[ 122], 80.00th=[ 132], 90.00th=[ 355], 95.00th=[ 535], 00:15:16.026 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:15:16.026 | 99.99th=[ 8926] 00:15:16.026 bw ( KiB/s): min=24576, max=800768, per=14.10%, avg=412672.00, stdev=548850.63, samples=2 00:15:16.026 iops : min= 24, max= 782, avg=403.00, stdev=535.99, samples=2 00:15:16.026 lat (msec) : 20=0.86%, 50=2.87%, 100=5.16%, 250=77.94%, 500=7.83% 00:15:16.026 lat (msec) : 750=0.48%, >=2000=4.87% 00:15:16.026 cpu : usr=0.00%, sys=1.20%, ctx=1150, majf=0, minf=32769 00:15:16.026 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.5%, 32=3.1%, >=64=94.0% 00:15:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.026 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:16.026 issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.026 job5: (groupid=0, jobs=1): err= 0: pid=3891827: Tue Nov 26 21:12:01 2024 00:15:16.026 read: IOPS=14, BW=14.1MiB/s (14.7MB/s)(142MiB/10103msec) 00:15:16.026 slat (usec): min=386, max=2068.1k, avg=70750.05, stdev=331829.18 00:15:16.026 clat (msec): min=55, max=10063, avg=4522.02, stdev=4285.00 00:15:16.026 lat (msec): min=148, max=10076, avg=4592.77, stdev=4293.66 00:15:16.026 clat percentiles (msec): 00:15:16.026 | 1.00th=[ 148], 5.00th=[ 271], 10.00th=[ 481], 20.00th=[ 693], 00:15:16.026 
| 30.00th=[ 919], 40.00th=[ 1133], 50.00th=[ 1267], 60.00th=[ 5738], 00:15:16.026 | 70.00th=[ 9731], 80.00th=[ 9866], 90.00th=[10000], 95.00th=[10000], 00:15:16.026 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:15:16.026 | 99.99th=[10000] 00:15:16.026 bw ( KiB/s): min=28672, max=28672, per=0.98%, avg=28672.00, stdev= 0.00, samples=1 00:15:16.026 iops : min= 28, max= 28, avg=28.00, stdev= 0.00, samples=1 00:15:16.026 lat (msec) : 100=0.70%, 250=2.11%, 500=9.86%, 750=9.86%, 1000=11.27% 00:15:16.026 lat (msec) : 2000=21.13%, >=2000=45.07% 00:15:16.026 cpu : usr=0.00%, sys=0.83%, ctx=304, majf=0, minf=32769 00:15:16.026 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=5.6%, 16=11.3%, 32=22.5%, >=64=55.6% 00:15:16.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:16.026 complete : 0=0.0%, 4=93.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=6.2% 00:15:16.026 issued rwts: total=142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:16.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:16.026 00:15:16.026 Run status group 0 (all jobs): 00:15:16.026 READ: bw=2859MiB/s (2998MB/s), 795KiB/s-193MiB/s (814kB/s-202MB/s), io=36.2GiB (38.8GB), run=10009-12952msec 00:15:16.026 00:15:16.026 Disk stats (read/write): 00:15:16.026 nvme0n1: ios=33405/0, merge=0/0, ticks=8704663/0, in_queue=8704663, util=98.64% 00:15:16.026 nvme1n1: ios=76653/0, merge=0/0, ticks=12458542/0, in_queue=12458542, util=99.04% 00:15:16.026 nvme2n1: ios=27921/0, merge=0/0, ticks=7334335/0, in_queue=7334335, util=99.10% 00:15:16.026 nvme3n1: ios=18680/0, merge=0/0, ticks=9025596/0, in_queue=9025596, util=99.14% 00:15:16.026 nvme4n1: ios=59295/0, merge=0/0, ticks=8766355/0, in_queue=8766355, util=99.18% 00:15:16.026 nvme5n1: ios=79307/0, merge=0/0, ticks=9373707/0, in_queue=9373707, util=99.09% 00:15:16.026 21:12:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:15:16.026 21:12:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:15:16.026 21:12:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:16.026 21:12:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:15:16.026 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:16.026 21:12:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:16.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:16.593 21:12:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:15:17.527 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:15:17.527 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:15:17.527 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:17.527 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:17.527 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:17.786 21:12:04 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:17.786 21:12:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:15:18.720 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:15:18.720 21:12:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:15:18.720 21:12:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:18.720 21:12:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:18.720 21:12:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:18.720 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:15:19.655 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:15:19.655 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:15:19.655 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:19.655 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:19.655 21:12:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
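The teardown traced here repeats one fixed pattern per controller: host-side disconnect, wait for the serial number to vanish from lsblk, then delete the subsystem over RPC. The trace above covers cnode0 through cnode4 and continues with cnode5 below. A condensed sketch of that loop, assembled from the commands visible in the xtrace (rpc_cmd is the harness's JSON-RPC wrapper; the sleep pacing between the lsblk probes is an assumption, since only the probes themselves appear in this excerpt):

for i in $(seq 0 5); do
    # Host side: drop the NVMe-oF controller for this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

    # waitforserial_disconnect: poll until no block device reports the serial.
    serial=$(printf 'SPDK%014d' "$i")    # SPDK00000000000000 .. SPDK00000000000005
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        sleep 1                          # assumed pacing; not visible in the trace
    done

    # Target side: remove the subsystem through the SPDK JSON-RPC helper.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done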
00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:15:19.655 21:12:07 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:15:20.587 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:15:20.587 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:15:20.587 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:15:20.587 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:20.587 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:15:20.588 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:20.588 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000005 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:20.846 rmmod nvme_rdma 00:15:20.846 rmmod nvme_fabrics 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # 
return 0 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3890369 ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3890369 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3890369 ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3890369 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890369 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890369' 00:15:20.846 killing process with pid 3890369 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3890369 00:15:20.846 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3890369 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:21.105 00:15:21.105 real 0m33.051s 00:15:21.105 user 1m55.516s 00:15:21.105 sys 0m13.956s 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:15:21.105 ************************************ 00:15:21.105 END TEST nvmf_srq_overwhelm 00:15:21.105 ************************************ 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.105 21:12:08 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.363 ************************************ 00:15:21.363 START TEST nvmf_shutdown 00:15:21.363 ************************************ 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:15:21.363 * Looking for test storage... 
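Before shutdown.sh proceeds, note how the srq_overwhelm run above was wound down: nvmftestfini synced, unloaded nvme-rdma and nvme-fabrics on the host (the rmmod lines in the trace), then stopped the target app by PID with killprocess (pid 3890369 in this run). A sketch of killprocess reconstructed from the traced checks; the sudo guard's alternate branch and the error handling around wait are not visible here and are assumed:

killprocess() {
    local pid=$1
    kill -0 "$pid"                                   # fail early if it already exited
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1           # assumed guard; comm is reactor_0 here
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                              # reap; the app may exit nonzero
}

killprocess 3890369                                  # pid of the nvmf target in this run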
00:15:21.363 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:21.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.363 --rc genhtml_branch_coverage=1 00:15:21.363 --rc genhtml_function_coverage=1 00:15:21.363 --rc genhtml_legend=1 00:15:21.363 --rc geninfo_all_blocks=1 00:15:21.363 --rc geninfo_unexecuted_blocks=1 00:15:21.363 00:15:21.363 ' 00:15:21.363 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.364 --rc genhtml_branch_coverage=1 00:15:21.364 --rc genhtml_function_coverage=1 00:15:21.364 --rc genhtml_legend=1 00:15:21.364 --rc geninfo_all_blocks=1 00:15:21.364 --rc geninfo_unexecuted_blocks=1 00:15:21.364 00:15:21.364 ' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.364 --rc genhtml_branch_coverage=1 00:15:21.364 --rc genhtml_function_coverage=1 00:15:21.364 --rc genhtml_legend=1 00:15:21.364 --rc geninfo_all_blocks=1 00:15:21.364 --rc geninfo_unexecuted_blocks=1 00:15:21.364 00:15:21.364 ' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:21.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:21.364 --rc genhtml_branch_coverage=1 00:15:21.364 --rc genhtml_function_coverage=1 00:15:21.364 --rc genhtml_legend=1 00:15:21.364 --rc geninfo_all_blocks=1 00:15:21.364 --rc geninfo_unexecuted_blocks=1 00:15:21.364 00:15:21.364 ' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:21.364 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:21.364 21:12:08 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:21.364 ************************************ 00:15:21.364 START TEST nvmf_shutdown_tc1 00:15:21.364 ************************************ 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:21.364 21:12:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:27.921 21:12:14 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:27.921 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:27.921 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:15:27.921 Found net devices under 0000:18:00.0: mlx_0_0 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:27.921 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:27.922 Found net devices under 0000:18:00.1: mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
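
The trace above reduces to two steps: map each matched Mellanox PCI function to its kernel netdev through sysfs, then load the InfiniBand/RDMA module stack. A condensed sketch of that logic, with the PCI address, interface name, and module order taken from this run:

pci=0000:18:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"                                # same order as rdma_device_init above
done
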
00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:27.922 21:12:14 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:27.922 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.922 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:27.922 altname enp24s0f0np0 00:15:27.922 altname ens785f0np0 00:15:27.922 inet 192.168.100.8/24 scope global mlx_0_0 00:15:27.922 valid_lft forever preferred_lft forever 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:27.922 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:27.922 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:27.922 altname enp24s0f1np1 00:15:27.922 altname ens785f1np1 00:15:27.922 inet 192.168.100.9/24 scope global mlx_0_1 00:15:27.922 valid_lft forever preferred_lft forever 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.922 
21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:27.922 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:27.923 192.168.100.9' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:27.923 192.168.100.9' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:15:27.923 21:12:14 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:27.923 192.168.100.9' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3899294 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3899294 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3899294 ']' 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 [2024-11-26 21:12:14.540805] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
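
Before the target comes up, allocate_nic_ips and get_available_rdma_ips resolve each RDMA interface to its IPv4 address with the same ip/awk/cut one-liner, and head/tail then pick the first and second entries out of the newline-separated list. Reduced to its core, with the interface names and addresses from this run:

get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
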
00:15:27.923 [2024-11-26 21:12:14.540849] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.923 [2024-11-26 21:12:14.599257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:27.923 [2024-11-26 21:12:14.638160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.923 [2024-11-26 21:12:14.638194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.923 [2024-11-26 21:12:14.638202] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.923 [2024-11-26 21:12:14.638208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.923 [2024-11-26 21:12:14.638212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.923 [2024-11-26 21:12:14.639463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:27.923 [2024-11-26 21:12:14.639544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:27.923 [2024-11-26 21:12:14.639668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:27.923 [2024-11-26 21:12:14.639670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 [2024-11-26 21:12:14.804237] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae8830/0xaecd20) succeed. 00:15:27.923 [2024-11-26 21:12:14.812375] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae9ec0/0xb2e3c0) succeed. 
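
With both mlx5 ports registered as IB devices, the bring-up so far amounts to starting the target and creating the RDMA transport. A minimal hand-rolled equivalent, assuming an SPDK checkout as the working directory (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py):

./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &      # same flags as nvmfappstart above
./scripts/rpc.py framework_wait_init                # returns once the app finishes init
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
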
00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.923 21:12:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:27.923 Malloc1 00:15:27.923 [2024-11-26 21:12:15.029419] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:27.923 Malloc2 00:15:27.923 Malloc3 00:15:27.923 Malloc4 00:15:27.923 Malloc5 00:15:27.923 Malloc6 00:15:27.923 Malloc7 00:15:27.923 Malloc8 00:15:28.181 Malloc9 00:15:28.181 Malloc10 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3899517 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3899517 /var/tmp/bdevperf.sock 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3899517 ']' 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
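
The loop above only traces the cat calls that assemble rpcs.txt; the per-subsystem RPC bodies themselves are not echoed. Judging from the Malloc bdevs and the 192.168.100.8:4420 RDMA listener that appear next, one iteration corresponds roughly to the following calls (block count, block size, and serial number here are illustrative assumptions):

i=1
./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512    # RAM-backed namespace bdev
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
    -t rdma -a 192.168.100.8 -s 4420
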
00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.181 "ddgst": ${ddgst:-false} 00:15:28.181 }, 00:15:28.181 "method": "bdev_nvme_attach_controller" 00:15:28.181 } 00:15:28.181 EOF 00:15:28.181 )") 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.181 "ddgst": ${ddgst:-false} 00:15:28.181 }, 00:15:28.181 "method": "bdev_nvme_attach_controller" 00:15:28.181 } 00:15:28.181 EOF 00:15:28.181 )") 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.181 "ddgst": ${ddgst:-false} 00:15:28.181 }, 00:15:28.181 "method": "bdev_nvme_attach_controller" 00:15:28.181 } 00:15:28.181 EOF 00:15:28.181 )") 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.181 "ddgst": ${ddgst:-false} 00:15:28.181 }, 00:15:28.181 "method": "bdev_nvme_attach_controller" 00:15:28.181 } 00:15:28.181 EOF 00:15:28.181 )") 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.181 "ddgst": ${ddgst:-false} 00:15:28.181 }, 00:15:28.181 "method": "bdev_nvme_attach_controller" 00:15:28.181 } 00:15:28.181 EOF 00:15:28.181 )") 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.181 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.181 { 00:15:28.181 "params": { 00:15:28.181 "name": "Nvme$subsystem", 00:15:28.181 "trtype": "$TEST_TRANSPORT", 00:15:28.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.181 "adrfam": "ipv4", 00:15:28.181 "trsvcid": "$NVMF_PORT", 00:15:28.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.181 "hdgst": ${hdgst:-false}, 00:15:28.182 "ddgst": ${ddgst:-false} 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 } 00:15:28.182 EOF 00:15:28.182 )") 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.182 { 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme$subsystem", 00:15:28.182 "trtype": "$TEST_TRANSPORT", 00:15:28.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "$NVMF_PORT", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.182 "hdgst": ${hdgst:-false}, 00:15:28.182 "ddgst": ${ddgst:-false} 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 } 00:15:28.182 EOF 00:15:28.182 )") 00:15:28.182 [2024-11-26 21:12:15.511095] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:15:28.182 [2024-11-26 21:12:15.511144] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.182 { 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme$subsystem", 00:15:28.182 "trtype": "$TEST_TRANSPORT", 00:15:28.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "$NVMF_PORT", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.182 "hdgst": ${hdgst:-false}, 00:15:28.182 "ddgst": ${ddgst:-false} 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 } 00:15:28.182 EOF 00:15:28.182 )") 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.182 { 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme$subsystem", 00:15:28.182 "trtype": "$TEST_TRANSPORT", 00:15:28.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "$NVMF_PORT", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.182 "hdgst": ${hdgst:-false}, 00:15:28.182 "ddgst": ${ddgst:-false} 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 } 00:15:28.182 EOF 00:15:28.182 )") 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:28.182 { 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme$subsystem", 00:15:28.182 "trtype": "$TEST_TRANSPORT", 00:15:28.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "$NVMF_PORT", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:28.182 "hdgst": ${hdgst:-false}, 00:15:28.182 "ddgst": ${ddgst:-false} 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 } 00:15:28.182 EOF 00:15:28.182 )") 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
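
gen_nvmf_target_json builds one bdev_nvme_attach_controller fragment per subsystem with a heredoc, then joins the array on commas (the IFS=, plus "${config[*]}" expansion traced next) and pretty-prints the result through jq. A reduced two-subsystem sketch of the pattern; the wrapper object is not echoed in this trace, so the "subsystems"/"config" keys below are an assumption about the shape a --json consumer expects:

config=()
for subsystem in 1 2; do
    config+=("$(
cat <<EOF
{ "params": { "name": "Nvme$subsystem" }, "method": "bdev_nvme_attach_controller" }
EOF
    )")
done
jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev",
    "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ] } ] }
JSON

The comma join is why the merged output above shows the '},{' seams between the per-controller objects.
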
00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:28.182 21:12:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme1", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme2", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme3", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme4", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme5", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme6", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme7", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme8", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme9", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 },{ 00:15:28.182 "params": { 00:15:28.182 "name": "Nvme10", 00:15:28.182 "trtype": "rdma", 00:15:28.182 "traddr": "192.168.100.8", 00:15:28.182 "adrfam": "ipv4", 00:15:28.182 "trsvcid": "4420", 00:15:28.182 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:28.182 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:28.182 "hdgst": false, 00:15:28.182 "ddgst": false 00:15:28.182 }, 00:15:28.182 "method": "bdev_nvme_attach_controller" 00:15:28.182 }' 00:15:28.182 [2024-11-26 21:12:15.572393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.182 [2024-11-26 21:12:15.609882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3899517 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:15:29.111 21:12:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:15:30.478 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3899517 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:15:30.478 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3899294 00:15:30.478 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:30.478 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:30.478 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:15:30.478 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:15:30.479 21:12:17 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": 
"bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 [2024-11-26 21:12:17.518494] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:15:30.479 [2024-11-26 21:12:17.518543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899897 ] 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:30.479 { 00:15:30.479 "params": { 00:15:30.479 "name": "Nvme$subsystem", 00:15:30.479 "trtype": "$TEST_TRANSPORT", 00:15:30.479 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:30.479 "adrfam": "ipv4", 00:15:30.479 "trsvcid": "$NVMF_PORT", 00:15:30.479 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:15:30.479 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:30.479 "hdgst": ${hdgst:-false}, 00:15:30.479 "ddgst": ${ddgst:-false} 00:15:30.479 }, 00:15:30.479 "method": "bdev_nvme_attach_controller" 00:15:30.479 } 00:15:30.479 EOF 00:15:30.479 )") 00:15:30.479 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:15:30.480 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:15:30.480 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:15:30.480 21:12:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme1", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme2", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme3", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme4", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme5", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme6", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme7", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme8", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme9", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 },{ 00:15:30.480 "params": { 00:15:30.480 "name": "Nvme10", 00:15:30.480 "trtype": "rdma", 00:15:30.480 "traddr": "192.168.100.8", 00:15:30.480 "adrfam": "ipv4", 00:15:30.480 "trsvcid": "4420", 00:15:30.480 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:30.480 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:30.480 "hdgst": false, 00:15:30.480 "ddgst": false 00:15:30.480 }, 00:15:30.480 "method": "bdev_nvme_attach_controller" 00:15:30.480 }' 00:15:30.480 [2024-11-26 21:12:17.579353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.480 [2024-11-26 21:12:17.617263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.044 Running I/O for 1 seconds... 
00:15:32.416 3652.00 IOPS, 228.25 MiB/s
00:15:32.416 Latency(us)
00:15:32.416 [2024-11-26T20:12:19.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:32.416 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme1n1 : 1.16 385.73 24.11 0.00 0.00 160892.07 24175.50 184083.34
00:15:32.416 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme2n1 : 1.16 385.29 24.08 0.00 0.00 160046.97 27379.48 170879.05
00:15:32.416 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme3n1 : 1.16 393.52 24.59 0.00 0.00 152653.01 5072.97 160781.65
00:15:32.416 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme4n1 : 1.16 407.78 25.49 0.00 0.00 144819.74 5849.69 125829.12
00:15:32.416 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme5n1 : 1.17 384.33 24.02 0.00 0.00 150090.23 20971.52 146023.92
00:15:32.416 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme6n1 : 1.17 390.95 24.43 0.00 0.00 144664.23 10679.94 114178.28
00:15:32.416 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme7n1 : 1.17 436.95 27.31 0.00 0.00 133940.24 3810.80 107964.49
00:15:32.416 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme8n1 : 1.17 436.41 27.28 0.00 0.00 132468.74 4490.43 100973.99
00:15:32.416 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme9n1 : 1.17 435.85 27.24 0.00 0.00 130917.50 5315.70 90488.23
00:15:32.416 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:32.416 Verification LBA range: start 0x0 length 0x400
00:15:32.416 Nvme10n1 : 1.18 380.90 23.81 0.00 0.00 147672.39 6165.24 185636.79
00:15:32.416 [2024-11-26T20:12:19.852Z] ===================================================================================================================
00:15:32.416 [2024-11-26T20:12:19.852Z] Total : 4037.69 252.36 0.00 0.00 145278.96 3810.80 185636.79
00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:15:32.675 21:12:19
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:32.675 rmmod nvme_rdma 00:15:32.675 rmmod nvme_fabrics 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3899294 ']' 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3899294 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3899294 ']' 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3899294 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:32.675 21:12:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3899294 00:15:32.675 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:32.675 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:32.675 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3899294' 00:15:32.675 killing process with pid 3899294 00:15:32.675 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3899294 00:15:32.675 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3899294 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:33.246 00:15:33.246 real 0m11.663s 00:15:33.246 user 0m27.349s 00:15:33.246 sys 0m5.240s 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:33.246 21:12:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:15:33.246 ************************************ 00:15:33.246 END TEST nvmf_shutdown_tc1 00:15:33.246 ************************************ 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.246 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:33.246 ************************************ 00:15:33.246 START TEST nvmf_shutdown_tc2 00:15:33.246 ************************************ 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:33.247 21:12:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # 
pci_devs+=("${mlx[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:33.247 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:33.247 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.247 21:12:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:33.247 Found net devices under 0000:18:00.0: mlx_0_0 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:33.247 Found net devices under 0000:18:00.1: mlx_0_1 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.247 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 
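The rdma_device_init step above reduces to loading the kernel IB/RDMA stack. Condensed from the exact modprobe sequence in the trace (nvmf/common.sh@66-72):

# The modules loaded by load_ib_rdma_modules in the trace above.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done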
00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:33.248 21:12:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:33.248 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:33.248 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:33.248 altname enp24s0f0np0 00:15:33.248 altname ens785f0np0 00:15:33.248 inet 192.168.100.8/24 scope global mlx_0_0 00:15:33.248 valid_lft forever preferred_lft forever 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:33.248 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:33.248 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:33.248 altname enp24s0f1np1 00:15:33.248 altname ens785f1np1 00:15:33.248 inet 192.168.100.9/24 scope global mlx_0_1 00:15:33.248 valid_lft forever preferred_lft forever 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:33.248 21:12:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:33.248 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:33.509 192.168.100.9' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:33.509 192.168.100.9' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:33.509 192.168.100.9' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3900536 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3900536 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3900536 ']' 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
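At this point nvmfappstart has launched the target (nvmf/common.sh@508-510 above) and blocks in waitforlisten. A minimal sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock RPC socket; the real waitforlisten in autotest_common.sh is more thorough and also verifies the pid stays alive:

# Start nvmf_tgt with the core mask from the trace, then wait for its RPC socket.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for _ in {1..100}; do
    [[ -S /var/tmp/spdk.sock ]] && break   # socket appears once the app is listening
    sleep 0.1
done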
00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:33.509 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.509 [2024-11-26 21:12:20.795954] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:15:33.509 [2024-11-26 21:12:20.796001] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.509 [2024-11-26 21:12:20.856299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.509 [2024-11-26 21:12:20.895757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.509 [2024-11-26 21:12:20.895792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.509 [2024-11-26 21:12:20.895798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.509 [2024-11-26 21:12:20.895803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.509 [2024-11-26 21:12:20.895808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.509 [2024-11-26 21:12:20.897285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.509 [2024-11-26 21:12:20.897375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.509 [2024-11-26 21:12:20.897480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.509 [2024-11-26 21:12:20.897481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:33.769 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.769 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:33.769 21:12:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.769 [2024-11-26 21:12:21.058549] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13a7830/0x13abd20) succeed. 00:15:33.769 [2024-11-26 21:12:21.066716] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13a8ec0/0x13ed3c0) succeed. 
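rpc_cmd in the SPDK test harness is a thin wrapper around scripts/rpc.py, so the transport setup that produced the two create_ib_device notices above is equivalent to the following, with the flags copied verbatim from the trace (8192 is the IO unit size in bytes):

# Create the RDMA transport on the freshly started target (shutdown.sh@21 above).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192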
00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:33.769 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:15:34.028 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:34.029 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.029 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:34.029 Malloc1 00:15:34.029 [2024-11-26 21:12:21.274762] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:34.029 Malloc2 00:15:34.029 Malloc3 00:15:34.029 Malloc4 00:15:34.029 Malloc5 00:15:34.288 Malloc6 00:15:34.288 Malloc7 00:15:34.288 Malloc8 00:15:34.288 Malloc9 00:15:34.288 Malloc10 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3900837 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3900837 /var/tmp/bdevperf.sock 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3900837 ']' 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:34.288 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
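The cat calls at shutdown.sh@29 above append one RPC block per subsystem into rpcs.txt, which shutdown.sh@36 then replays through rpc_cmd to create the Malloc1..Malloc10 bdevs and their subsystems. The exact blocks live in shutdown.sh; a representative sequence for one subsystem, using standard SPDK RPC names and hypothetical sizes, might be:

# Hypothetical per-subsystem setup, in the spirit of the blocks cat'ed above.
rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

shutdown.sh@103 then hands the generated JSON to bdevperf through bash process substitution, which is why the trace shows --json /dev/fd/63; schematically, with the remaining arguments copied from the trace:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10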
00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.289 { 00:15:34.289 "params": { 00:15:34.289 "name": "Nvme$subsystem", 00:15:34.289 "trtype": "$TEST_TRANSPORT", 00:15:34.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.289 "adrfam": "ipv4", 00:15:34.289 "trsvcid": "$NVMF_PORT", 00:15:34.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.289 "hdgst": ${hdgst:-false}, 00:15:34.289 "ddgst": ${ddgst:-false} 00:15:34.289 }, 00:15:34.289 "method": "bdev_nvme_attach_controller" 00:15:34.289 } 00:15:34.289 EOF 00:15:34.289 )") 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.289 { 00:15:34.289 "params": { 00:15:34.289 "name": "Nvme$subsystem", 00:15:34.289 "trtype": "$TEST_TRANSPORT", 00:15:34.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.289 "adrfam": "ipv4", 00:15:34.289 "trsvcid": "$NVMF_PORT", 00:15:34.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.289 "hdgst": ${hdgst:-false}, 00:15:34.289 "ddgst": ${ddgst:-false} 00:15:34.289 }, 00:15:34.289 "method": "bdev_nvme_attach_controller" 00:15:34.289 } 00:15:34.289 EOF 00:15:34.289 )") 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.289 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.289 { 00:15:34.289 "params": { 00:15:34.289 "name": "Nvme$subsystem", 00:15:34.289 "trtype": "$TEST_TRANSPORT", 00:15:34.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.289 "adrfam": "ipv4", 00:15:34.289 "trsvcid": "$NVMF_PORT", 00:15:34.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.289 "hdgst": ${hdgst:-false}, 00:15:34.289 "ddgst": ${ddgst:-false} 00:15:34.289 }, 00:15:34.289 "method": "bdev_nvme_attach_controller" 00:15:34.289 } 00:15:34.289 EOF 00:15:34.289 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 [2024-11-26 21:12:21.747664] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:15:34.549 [2024-11-26 21:12:21.747731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900837 ] 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:34.549 { 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme$subsystem", 00:15:34.549 "trtype": "$TEST_TRANSPORT", 00:15:34.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:34.549 "adrfam": "ipv4", 00:15:34.549 "trsvcid": "$NVMF_PORT", 00:15:34.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:34.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:34.549 "hdgst": ${hdgst:-false}, 00:15:34.549 "ddgst": ${ddgst:-false} 00:15:34.549 }, 00:15:34.549 "method": "bdev_nvme_attach_controller" 00:15:34.549 } 00:15:34.549 EOF 00:15:34.549 )") 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
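
Note: the repeated config+=(...) iterations above are SPDK's gen_nvmf_target_json helper (nvmf/common.sh@560-586) at work: each pass expands one bdev_nvme_attach_controller JSON fragment from a heredoc into a bash array, and the fragments are then comma-joined and run through jq. Below is a minimal sketch of that pattern, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported as in this run; the enclosing [ ] are added here so the joined fragments form valid JSON on their own, whereas the real helper embeds them in a larger JSON document before its jq . step.

#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly traced above.
config=()
for subsystem in "${@:-1}"; do # the test passes subsystems 1..10
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the fragments (IFS=, in a subshell so it does not leak)
# and validate/pretty-print with jq, as the trace does.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .
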
00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:15:34.549 21:12:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:34.549 "params": { 00:15:34.549 "name": "Nvme1", 00:15:34.549 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme2", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme3", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme4", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme5", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme6", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme7", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme8", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme9", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 },{ 00:15:34.550 "params": { 00:15:34.550 "name": "Nvme10", 00:15:34.550 "trtype": "rdma", 00:15:34.550 "traddr": "192.168.100.8", 00:15:34.550 "adrfam": "ipv4", 00:15:34.550 "trsvcid": "4420", 00:15:34.550 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:34.550 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:34.550 "hdgst": false, 00:15:34.550 "ddgst": false 00:15:34.550 }, 00:15:34.550 "method": "bdev_nvme_attach_controller" 00:15:34.550 }' 00:15:34.550 [2024-11-26 21:12:21.808545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.550 [2024-11-26 21:12:21.847000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.488 Running I/O for 10 seconds... 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:35.488 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.488 21:12:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:35.747 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.747 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=19 00:15:35.747 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 19 -ge 100 ']' 00:15:35.747 21:12:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=171 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 171 -ge 100 ']' 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3900837 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3900837 ']' 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3900837 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900837 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900837' 00:15:36.006 killing process with pid 3900837 00:15:36.006 21:12:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3900837 00:15:36.006 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3900837 00:15:36.265 Received shutdown signal, test time was about 0.774653 seconds 00:15:36.265 00:15:36.265 Latency(us) 00:15:36.265 [2024-11-26T20:12:23.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.265 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme1n1 : 0.76 389.42 24.34 0.00 0.00 161445.75 6092.42 203501.42 00:15:36.265 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme2n1 : 0.76 399.24 24.95 0.00 0.00 155091.49 7475.96 188743.68 00:15:36.265 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme3n1 : 0.76 419.63 26.23 0.00 0.00 144271.36 6650.69 141363.58 00:15:36.265 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme4n1 : 0.76 421.62 26.35 0.00 0.00 140817.30 4369.07 133596.35 00:15:36.265 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme5n1 : 0.77 418.18 26.14 0.00 0.00 139619.59 8738.13 121945.51 00:15:36.265 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme6n1 : 0.77 417.49 26.09 0.00 0.00 136743.14 9369.22 111848.11 00:15:36.265 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme7n1 : 0.77 416.91 26.06 0.00 0.00 133589.83 9611.95 104857.60 00:15:36.265 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme8n1 : 0.77 416.24 26.01 0.00 0.00 131336.65 10000.31 95148.56 00:15:36.265 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme9n1 : 0.77 415.45 25.97 0.00 0.00 129335.22 10728.49 92430.03 00:15:36.265 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:36.265 Verification LBA range: start 0x0 length 0x400 00:15:36.265 Nvme10n1 : 0.77 330.73 20.67 0.00 0.00 158617.55 2742.80 206608.31 00:15:36.265 [2024-11-26T20:12:23.701Z] =================================================================================================================== 00:15:36.265 [2024-11-26T20:12:23.701Z] Total : 4044.91 252.81 0.00 0.00 142560.25 2742.80 206608.31 00:15:36.524 21:12:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3900536 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:15:37.463 21:12:24 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:37.463 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:37.464 rmmod nvme_rdma 00:15:37.464 rmmod nvme_fabrics 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3900536 ']' 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3900536 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3900536 ']' 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3900536 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3900536 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3900536' 00:15:37.464 killing process with pid 3900536 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3900536 00:15:37.464 21:12:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3900536 00:15:38.034 21:12:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:38.034 00:15:38.034 real 0m4.737s 00:15:38.034 user 0m19.041s 00:15:38.034 sys 0m0.977s 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:15:38.034 ************************************ 00:15:38.034 END TEST nvmf_shutdown_tc2 00:15:38.034 ************************************ 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:38.034 ************************************ 00:15:38.034 START TEST nvmf_shutdown_tc3 00:15:38.034 ************************************ 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.034 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.035 21:12:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.035 
21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:38.035 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:38.035 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:38.035 Found net devices under 0000:18:00.0: mlx_0_0 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:38.035 Found net devices under 0000:18:00.1: mlx_0_1 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:38.035 21:12:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:38.035 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:38.036 21:12:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:38.036 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.036 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:38.036 altname enp24s0f0np0 00:15:38.036 altname ens785f0np0 00:15:38.036 inet 192.168.100.8/24 scope global mlx_0_0 00:15:38.036 valid_lft forever preferred_lft forever 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:38.036 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:38.036 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:38.036 altname enp24s0f1np1 00:15:38.036 altname ens785f1np1 00:15:38.036 inet 192.168.100.9/24 scope global mlx_0_1 00:15:38.036 valid_lft forever preferred_lft forever 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:38.036 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 
-- # local net_dev rxe_net_dev rxe_net_devs 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:38.358 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print 
$4}' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:38.359 192.168.100.9' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:38.359 192.168.100.9' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:38.359 192.168.100.9' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3901522 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3901522 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3901522 ']' 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.359 21:12:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.359 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.359 [2024-11-26 21:12:25.609914] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:15:38.359 [2024-11-26 21:12:25.609960] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.359 [2024-11-26 21:12:25.670082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.359 [2024-11-26 21:12:25.711975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.359 [2024-11-26 21:12:25.712006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.359 [2024-11-26 21:12:25.712012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.359 [2024-11-26 21:12:25.712017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.359 [2024-11-26 21:12:25.712022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.359 [2024-11-26 21:12:25.713501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.359 [2024-11-26 21:12:25.713595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.359 [2024-11-26 21:12:25.713708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.359 [2024-11-26 21:12:25.713709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.640 [2024-11-26 21:12:25.868214] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x9b6830/0x9bad20) succeed. 00:15:38.640 [2024-11-26 21:12:25.876317] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9b7ec0/0x9fc3c0) succeed. 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:38.640 21:12:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.640 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:38.641 21:12:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.641 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:38.641 Malloc1 00:15:38.908 [2024-11-26 21:12:26.090332] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:38.908 Malloc2 00:15:38.908 Malloc3 00:15:38.908 Malloc4 00:15:38.908 Malloc5 00:15:38.908 Malloc6 00:15:38.908 Malloc7 00:15:39.178 Malloc8 00:15:39.178 Malloc9 00:15:39.178 Malloc10 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3901802 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3901802 /var/tmp/bdevperf.sock 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3901802 ']' 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:39.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
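
Note: the bdevperf launch traced at target/shutdown.sh@125-127 feeds the generated config over a process-substitution fd; the /dev/fd/63 path in the trace is simply what bash expands <(...) to. A hedged sketch of the launch-and-wait sequence, where waitforlisten and gen_nvmf_target_json are helpers from the suite's common scripts and the backgrounding/perfpid handling mirrors the trace:

# Sketch of the bdevperf start sequence seen above (tc3).
bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
"$bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
# Block until the app is up and listening on its RPC socket, which is
# what produces the "Waiting for process to start up..." message above.
waitforlisten "$perfpid" /var/tmp/bdevperf.sock
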
00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.178 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.178 { 00:15:39.178 "params": { 00:15:39.178 "name": "Nvme$subsystem", 00:15:39.178 "trtype": "$TEST_TRANSPORT", 00:15:39.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.178 "adrfam": "ipv4", 00:15:39.178 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 [2024-11-26 21:12:26.559400] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:15:39.179 [2024-11-26 21:12:26.559448] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901802 ] 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.179 { 00:15:39.179 "params": { 00:15:39.179 "name": "Nvme$subsystem", 00:15:39.179 "trtype": "$TEST_TRANSPORT", 00:15:39.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.179 "adrfam": "ipv4", 00:15:39.179 "trsvcid": "$NVMF_PORT", 00:15:39.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.179 "hdgst": ${hdgst:-false}, 00:15:39.179 "ddgst": ${ddgst:-false} 00:15:39.179 }, 00:15:39.179 "method": "bdev_nvme_attach_controller" 00:15:39.179 } 00:15:39.179 EOF 00:15:39.179 )") 00:15:39.179 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.180 { 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme$subsystem", 00:15:39.180 "trtype": "$TEST_TRANSPORT", 00:15:39.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "$NVMF_PORT", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.180 "hdgst": ${hdgst:-false}, 00:15:39.180 "ddgst": ${ddgst:-false} 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 } 00:15:39.180 EOF 00:15:39.180 )") 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:39.180 { 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme$subsystem", 00:15:39.180 "trtype": "$TEST_TRANSPORT", 00:15:39.180 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "$NVMF_PORT", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.180 "hdgst": ${hdgst:-false}, 00:15:39.180 "ddgst": ${ddgst:-false} 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 } 00:15:39.180 EOF 00:15:39.180 )") 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
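Each config+=(...) step traced above appends one bdev_nvme_attach_controller stanza, with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT expanded when cat runs; the jq call just traced, together with the IFS=,/printf join that follows below, emits the single JSON document printed next. A condensed sketch of the helper, paraphrased from the nvmf/common.sh@560-@586 fragments in this trace (the shipped function carries more plumbing around the same core, so treat this as an approximation):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem NQN; shell variables
        # expand here, producing the concrete values seen in the output below.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas (the IFS=,/printf pair at @585/@586) and let
    # jq (@584) validate the assembled bdev subsystem configuration.
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(IFS=","; printf '%s' "${config[*]}") ] } ] }
EOF
}

The document never touches disk: the two shutdown.sh@125 lines above show bdevperf reading it over --json /dev/fd/63, i.e. a process substitution of the form --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10).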
00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:15:39.180 21:12:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme1", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme2", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme3", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme4", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme5", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme6", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme7", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme8", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme9", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:15:39.180 "hdgst": false, 00:15:39.180 "ddgst": false 00:15:39.180 }, 00:15:39.180 "method": "bdev_nvme_attach_controller" 00:15:39.180 },{ 00:15:39.180 "params": { 00:15:39.180 "name": "Nvme10", 00:15:39.180 "trtype": "rdma", 00:15:39.180 "traddr": "192.168.100.8", 00:15:39.180 "adrfam": "ipv4", 00:15:39.180 "trsvcid": "4420", 00:15:39.180 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:15:39.180 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:15:39.181 "hdgst": false, 00:15:39.181 "ddgst": false 00:15:39.181 }, 00:15:39.181 "method": "bdev_nvme_attach_controller" 00:15:39.181 }' 00:15:39.441 [2024-11-26 21:12:26.619830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.442 [2024-11-26 21:12:26.658223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.378 Running I/O for 10 seconds... 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:15:40.378 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:15:40.379 21:12:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:40.638 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3901522
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3901522 ']'
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3901522
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3901522
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:15:40.897 21:12:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3901522' 00:15:40.897 killing process with pid 3901522 00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3901522 00:15:40.897 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3901522 00:15:41.415 2538.00 IOPS, 158.62 MiB/s [2024-11-26T20:12:28.851Z] 21:12:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:15:41.985 [2024-11-26 21:12:29.251878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188cfd00 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.251947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.251993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188bfc80 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188afc00 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001889fb80 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001888fb00 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001887fa80 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001886fa00 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001885f980 
len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001884f900 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001883f880 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001882f800 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001881f780 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001880f700 len:0x10000 key:0x181e00 00:15:41.985 [2024-11-26 21:12:29.252579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.985 [2024-11-26 21:12:29.252606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bf0000 len:0x10000 key:0x184200 00:15:41.985 [2024-11-26 21:12:29.252628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bdff80 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bcff00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bbfe80 len:0x10000 key:0x184200 00:15:41.986 
[2024-11-26 21:12:29.252783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bafe00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b9fd80 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b8fd00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b7fc80 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b6fc00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b5fb80 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b4fb00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b3fa80 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b2fa00 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252951] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b1f980 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b0f900 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.252987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.252998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aff880 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aef800 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018adf780 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018acf700 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018abf680 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aaf600 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a9f580 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a8f500 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a7f480 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a6f400 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a5f380 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a4f300 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a3f280 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a2f200 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a1f180 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a0f100 len:0x10000 key:0x184200 00:15:41.986 [2024-11-26 21:12:29.253293] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018df0000 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ddff80 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dcff00 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dbfe80 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dafe00 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.986 [2024-11-26 21:12:29.253394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d9fd80 len:0x10000 key:0x184500 00:15:41.986 [2024-11-26 21:12:29.253402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d8fd00 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d7fc80 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d6fc00 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d5fb80 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d4fb00 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d3fa80 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d2fa00 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d1f980 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d0f900 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cff880 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cef800 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cdf780 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 
sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ccf700 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.253650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.253660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188dfd80 len:0x10000 key:0x181e00 00:15:41.987 [2024-11-26 21:12:29.253668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edfd80 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfd00 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfc80 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafc00 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fb80 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fb00 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fa80 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 
[2024-11-26 21:12:29.256341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fa00 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5f980 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4f900 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3f880 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f800 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f780 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f700 len:0x10000 key:0x183f00 00:15:41.987 [2024-11-26 21:12:29.256463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cbf680 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018caf600 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c9f580 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c8f500 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c7f480 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c6f400 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c5f380 len:0x10000 key:0x184500 00:15:41.987 [2024-11-26 21:12:29.256593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.987 [2024-11-26 21:12:29.256603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c4f300 len:0x10000 key:0x184500 00:15:41.988 [2024-11-26 21:12:29.256611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c3f280 len:0x10000 key:0x184500 00:15:41.988 [2024-11-26 21:12:29.256630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c2f200 len:0x10000 key:0x184500 00:15:41.988 [2024-11-26 21:12:29.256650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c1f180 len:0x10000 key:0x184500 00:15:41.988 [2024-11-26 21:12:29.256669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256679] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c0f100 len:0x10000 key:0x184500 00:15:41.988 [2024-11-26 21:12:29.256687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191f0000 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191dff80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191cff00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191bfe80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191afe00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001919fd80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001918fd00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001917fc80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001916fc00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001915fb80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001914fb00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001913fa80 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001912fa00 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001911f980 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001910f900 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ff880 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.256984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.256995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ef800 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x2000190df780 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cf700 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bf680 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190af600 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909f580 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908f500 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907f480 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906f400 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905f380 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904f300 
len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903f280 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f200 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f180 len:0x10000 key:0x183800 00:15:41.988 [2024-11-26 21:12:29.257245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.988 [2024-11-26 21:12:29.257258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f100 len:0x10000 key:0x183800 00:15:41.989 [2024-11-26 21:12:29.257268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193f0000 len:0x10000 key:0x184600 00:15:41.989 [2024-11-26 21:12:29.257287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193dff80 len:0x10000 key:0x184600 00:15:41.989 [2024-11-26 21:12:29.257305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193cff00 len:0x10000 key:0x184600 00:15:41.989 [2024-11-26 21:12:29.257324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193bfe80 len:0x10000 key:0x184600 00:15:41.989 [2024-11-26 21:12:29.257342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193afe00 len:0x10000 key:0x184600 
00:15:41.989 [2024-11-26 21:12:29.257361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001939fd80 len:0x10000 key:0x184600 00:15:41.989 [2024-11-26 21:12:29.257380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.257390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eefe00 len:0x10000 key:0x183f00 00:15:41.989 [2024-11-26 21:12:29.257398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ee40e000 sqhd:7250 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.259934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.259951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:1180690 sqhd:f890 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.259961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.259969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:1180690 sqhd:f890 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.259978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.259986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:1180690 sqhd:f890 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.259995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.260003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:1180690 sqhd:f890 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.262160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.989 [2024-11-26 21:12:29.262174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:15:41.989 [2024-11-26 21:12:29.262184] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 
00:15:41.989 [2024-11-26 21:12:29.262201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.262210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.262219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.262235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.262243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.262260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.262269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.264473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.989 [2024-11-26 21:12:29.264486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:15:41.989 [2024-11-26 21:12:29.264494] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:15:41.989 [2024-11-26 21:12:29.264510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.264519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.264528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.264537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.264545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.264554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.264562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.264570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.266445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.989 [2024-11-26 21:12:29.266480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:15:41.989 [2024-11-26 21:12:29.266501] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 
00:15:41.989 [2024-11-26 21:12:29.266539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.266569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.266592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.266614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.266636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.266657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.266680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.266701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.268497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.989 [2024-11-26 21:12:29.268531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:15:41.989 [2024-11-26 21:12:29.268552] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:15:41.989 [2024-11-26 21:12:29.268586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.268608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.268632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.268652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.268675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.268696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.268718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.268739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.271005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.989 [2024-11-26 21:12:29.271036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:15:41.989 [2024-11-26 21:12:29.271057] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:15:41.989 [2024-11-26 21:12:29.271094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.271116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.271139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.271160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.271190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.271211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.989 [2024-11-26 21:12:29.271234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.989 [2024-11-26 21:12:29.271319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.273220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.990 [2024-11-26 21:12:29.273234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:15:41.990 [2024-11-26 21:12:29.273242] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 
00:15:41.990 [2024-11-26 21:12:29.273259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.273268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.273276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.273292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.273300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.273309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.273316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.275345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.990 [2024-11-26 21:12:29.275378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:15:41.990 [2024-11-26 21:12:29.275398] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:15:41.990 [2024-11-26 21:12:29.275436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.275458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.275481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.275502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.275524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.275545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.275567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.275595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.277691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.990 [2024-11-26 21:12:29.277724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:15:41.990 [2024-11-26 21:12:29.277743] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:15:41.990 [2024-11-26 21:12:29.277784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.277806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.277829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.277850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.277872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.277893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.277916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.277937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.279977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.990 [2024-11-26 21:12:29.280009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:15:41.990 [2024-11-26 21:12:29.280029] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:15:41.990 [2024-11-26 21:12:29.280063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.280085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.280108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.280128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.280151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.280172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.280193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:15:41.990 [2024-11-26 21:12:29.280214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32694 cdw0:1 sqhd:4990 p:0 m:0 dnr:0 00:15:41.990 [2024-11-26 21:12:29.298811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:41.990 [2024-11-26 21:12:29.298859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:15:41.990 [2024-11-26 21:12:29.298890] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:41.990 [2024-11-26 21:12:29.306405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:15:41.990 [2024-11-26 21:12:29.306413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:15:41.990 [2024-11-26 21:12:29.306468] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306478] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306487] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306499] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306518] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:15:41.990 [2024-11-26 21:12:29.306526] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:15:41.990 [2024-11-26 21:12:29.306534] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:15:41.990 [2024-11-26 21:12:29.306608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:15:41.990 [2024-11-26 21:12:29.306616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:15:41.990 [2024-11-26 21:12:29.306623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:15:41.990 [2024-11-26 21:12:29.306634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:15:41.990 [2024-11-26 21:12:29.308746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:15:41.990 task offset: 32768 on job bdev=Nvme6n1 fails
00:15:41.990
00:15:41.990                                                   Latency(us)
00:15:41.990 [2024-11-26T20:12:29.426Z] Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:15:41.990 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.990 Job: Nvme1n1 ended in about 1.80 seconds with error
00:15:41.990 Verification LBA range: start 0x0 length 0x400
00:15:41.990 Nvme1n1  : 1.80  142.40  8.90   35.60  0.00  357853.26  35535.08  1056343.23
00:15:41.990 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.990 Job: Nvme2n1 ended in about 1.80 seconds with error
00:15:41.990 Verification LBA range: start 0x0 length 0x400
00:15:41.990 Nvme2n1  : 1.80  142.34  8.90   35.59  0.00  355133.06  21456.97  1062557.01
00:15:41.990 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.990 Job: Nvme3n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme3n1  : 1.80  146.18  9.14   35.57  0.00  344875.28  4466.16   1062557.01
00:15:41.991 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme4n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme4n1  : 1.80  160.01  10.00  35.56  0.00  317898.15  6456.51   1062557.01
00:15:41.991 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme5n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme5n1  : 1.80  142.73  8.92   35.54  0.00  345906.37  14466.47  1062557.01
00:15:41.991 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme6n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme6n1  : 1.80  142.11  8.88   35.53  0.00  343718.12  34564.17  1056343.23
00:15:41.991 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme7n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme7n1  : 1.80  142.06  8.88   35.51  0.00  340878.83  48739.37  1056343.23
00:15:41.991 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme8n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme8n1  : 1.80  143.11  8.94   35.50  0.00  335763.23  13301.38  1056343.23
00:15:41.991 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme9n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme9n1  : 1.80  141.95  8.87   35.49  0.00  336153.87  40777.96  1112267.28
00:15:41.991 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:15:41.991 Job: Nvme10n1 ended in about 1.80 seconds with error
00:15:41.991 Verification LBA range: start 0x0 length 0x400
00:15:41.991 Nvme10n1 : 1.80  106.41  6.65   35.47  0.00  416590.13  49516.09  1099839.72
00:15:41.991 [2024-11-26T20:12:29.427Z] ===================================================================================================================
00:15:41.991 [2024-11-26T20:12:29.427Z] Total    :       1409.30 88.08  355.36 0.00  347787.61  4466.16   1112267.28
00:15:41.991 [2024-11-26 21:12:29.328037] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:15:41.991 [2024-11-26 21:12:29.328057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:15:41.991 [2024-11-26 21:12:29.328069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:15:41.991 [2024-11-26 21:12:29.336949] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.336967] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.336974] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:15:41.991 [2024-11-26 21:12:29.337057] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.337065] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.337069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168e5300
00:15:41.991 [2024-11-26 21:12:29.337150] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.337157] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.337162] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d9c80
00:15:41.991 [2024-11-26 21:12:29.341248] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.341360] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.341389] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168d2900
00:15:41.991 [2024-11-26 21:12:29.341497] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.341521] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.341546] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c6340
00:15:41.991 [2024-11-26 21:12:29.341693] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.341717] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.341734] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168c5040
00:15:41.991 [2024-11-26 21:12:29.341842] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.341866] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.341883] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168a8500
00:15:41.991 [2024-11-26 21:12:29.342548] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.342558] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.342562] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001689b1c0
00:15:41.991 [2024-11-26 21:12:29.342664] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.342671] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.342676] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001688e080
00:15:41.991 [2024-11-26 21:12:29.342767] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:15:41.991 [2024-11-26 21:12:29.342774] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:15:41.991 [2024-11-26 21:12:29.342779] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168bf1c0
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3901802
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3901802
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:42.249 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:15:42.250 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:15:42.250 21:12:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3901802
00:15:43.189 [2024-11-26 21:12:30.341348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.341378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.342689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.342724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.344261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.344276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.345677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.345711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.347181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.347213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.348857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.348870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.350406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.350438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.350458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.350477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.350499] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.350525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.350561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.350582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.350600] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.350621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.350647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.350665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.350685] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.350705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.352269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.352284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.353595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.353626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.354900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:43.189 [2024-11-26 21:12:30.354931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:15:43.189 [2024-11-26 21:12:30.354950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.354976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.354997] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355084] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355169] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355263] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355563] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355613] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:15:43.189 [2024-11-26 21:12:30.355630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:15:43.189 [2024-11-26 21:12:30.355637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:15:43.189 [2024-11-26 21:12:30.355644] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state
00:15:43.189 [2024-11-26 21:12:30.355652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:15:43.189 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:43.190 rmmod nvme_rdma 00:15:43.190 rmmod nvme_fabrics 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3901522 ']' 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3901522 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3901522 ']' 00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3901522 00:15:43.190 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3901522) - No such process 00:15:43.190 
21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3901522 is not found'
00:15:43.190 Process with pid 3901522 is not found
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:15:43.190
00:15:43.190 real 0m5.252s
00:15:43.190 user 0m15.102s
00:15:43.190 sys 0m1.099s
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:15:43.190 ************************************
00:15:43.190 END TEST nvmf_shutdown_tc3
00:15:43.190 ************************************
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]]
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:43.190 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:15:43.451 ************************************
00:15:43.451 START TEST nvmf_shutdown_tc4
00:15:43.451 ************************************
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']'
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:43.451 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
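The xtrace above shows gather_supported_nvmf_pci_devs building per-vendor PCI ID allowlists (e810, x722, mlx) before matching them against the NICs in this box. A rough standalone sketch of the same idea, reading sysfs directly rather than using the harness's pci_bus_cache arrays (the variable names here are illustrative, not taken from nvmf/common.sh):

    # enumerate Mellanox (0x15b3) NICs straight from sysfs; a simplified
    # stand-in for the pci_bus_cache lookups traced above
    mellanox=0x15b3
    mlx_pci=()
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor")     # e.g. 0x15b3
        device=$(<"$dev/device")     # e.g. 0x1015 (ConnectX-4 Lx)
        if [[ $vendor == "$mellanox" ]]; then
            mlx_pci+=("${dev##*/}")  # keep the BDF, e.g. 0000:18:00.0
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done

Run against this node it would print the same two devices the harness reports below, 0000:18:00.0 and 0000:18:00.1, both 0x15b3 - 0x1015.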
00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:43.452 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:43.452 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:43.452 
21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:43.452 Found net devices under 0000:18:00.0: mlx_0_0 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:43.452 Found net devices under 0000:18:00.1: mlx_0_1 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:43.452 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:43.453 21:12:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:43.453 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.453 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:43.453 altname enp24s0f0np0 00:15:43.453 altname ens785f0np0 00:15:43.453 inet 192.168.100.8/24 scope global mlx_0_0 00:15:43.453 valid_lft forever preferred_lft forever 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:43.453 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:43.453 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:43.453 altname enp24s0f1np1 00:15:43.453 altname ens785f1np1 00:15:43.453 inet 192.168.100.9/24 scope global mlx_0_1 00:15:43.453 valid_lft forever preferred_lft forever 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:43.453 192.168.100.9' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:43.453 192.168.100.9' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:43.453 192.168.100.9' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:43.453 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3902704 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3902704 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3902704 ']' 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.713 21:12:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.713 [2024-11-26 21:12:30.938610] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:15:43.713 [2024-11-26 21:12:30.938659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.713 [2024-11-26 21:12:30.999642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:43.713 [2024-11-26 21:12:31.040520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.713 [2024-11-26 21:12:31.040554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.713 [2024-11-26 21:12:31.040560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.713 [2024-11-26 21:12:31.040565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.713 [2024-11-26 21:12:31.040570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
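The nvmf_tgt invocation above passes -m 0x1E, and the four "Reactor started" notices that follow are the direct consequence: 0x1E is binary 11110, i.e. CPU cores 1 through 4. A quick standalone way to decode such an SPDK core mask (a helper for reading the log, not part of the test scripts):

    mask=0x1E                        # core mask from the nvmf_tgt command line
    for ((core = 0; core < 32; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
    done

This prints cores 1, 2, 3 and 4, matching the reactor start-up notices below (they appear out of numeric order because the reactors log concurrently).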
00:15:43.713 [2024-11-26 21:12:31.041911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:43.713 [2024-11-26 21:12:31.041995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:43.713 [2024-11-26 21:12:31.042115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.713 [2024-11-26 21:12:31.042117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:43.713 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.713 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:15:43.714 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.714 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:43.714 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.973 [2024-11-26 21:12:31.191848] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23ab830/0x23afd20) succeed. 00:15:43.973 [2024-11-26 21:12:31.199976] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23acec0/0x23f13c0) succeed. 
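With the target up, the shutdown.sh@27-29 trace that follows wipes rpcs.txt and appends one configuration block per subsystem (1 through 10), then replays the whole file through a single rpc_cmd call at shutdown.sh@36; the ten Malloc bdevs further down are the result. A hypothetical approximation of that generate-then-replay pattern, with illustrative RPC arguments that are assumptions rather than copies of shutdown.sh:

    # hypothetical stand-in for the rpcs.txt generation loop traced below
    num_subsystems=({1..10})
    rm -f rpcs.txt
    for i in "${num_subsystems[@]}"; do
        {
            echo "bdev_malloc_create -b Malloc$i 64 512"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420"
        } >> rpcs.txt
    done
    # replay one RPC per line (the harness batches this through its rpc_cmd helper)
    while read -r line; do
        scripts/rpc.py $line    # intentional word splitting: line holds method + args
    done < rpcs.txt

Queuing the RPCs in a file and replaying them in one pass keeps the per-subsystem setup cheap, which matters when the point of the test is the teardown rather than the configuration.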
00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.973 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.974 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:43.974 Malloc1 00:15:43.974 [2024-11-26 21:12:31.406306] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:44.233 Malloc2 00:15:44.233 Malloc3 00:15:44.233 Malloc4 00:15:44.233 Malloc5 00:15:44.233 Malloc6 00:15:44.233 Malloc7 00:15:44.493 Malloc8 00:15:44.493 Malloc9 00:15:44.493 Malloc10 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3903009 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:15:44.493 21:12:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:15:44.493 [2024-11-26 21:12:31.917674] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
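This is the heart of tc4: spdk_nvme_perf is started in the background (perfpid=3903009) driving 128-deep 44 KiB random writes over RDMA, and after five seconds the target (pid 3902704) is killed underneath it. Every command still in flight then completes with sct=0, sc=8 (NVMe generic status 0x8, command aborted due to SQ deletion) and the initiator's keep-alives start failing, which is exactly what the log below shows. In outline the flow is as follows (a simplified sketch; $perf and $nvmfpid stand in for the harness's variables, and the real sequencing lives in target/shutdown.sh):

    # shape of the tc4 scenario, mirroring the command line traced above
    "$perf" -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 &
    perfpid=$!
    sleep 5              # let the queues fill with in-flight writes
    kill "$nvmfpid"      # kill the nvmf target under load
    sleep 1
    wait "$perfpid"      # perf must fail fast with aborted I/Os, not hang

The pass criterion is graceful failure on the initiator side: the perf process has to observe the aborts and exit rather than hang on a dead qpair.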
00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3902704 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3902704 ']' 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3902704 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3902704 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3902704' 00:15:49.770 killing process with pid 3902704 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3902704 00:15:49.770 21:12:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3902704 00:15:49.770 NVMe io qpair process completion error 00:15:49.770 NVMe io qpair process completion error 00:15:49.770 NVMe io qpair process completion error 00:15:49.770 NVMe io qpair process completion error 00:15:50.338 21:12:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:15:50.598 [2024-11-26 21:12:37.978978] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:15:50.598 [2024-11-26 21:12:37.979123] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:15:50.598 [2024-11-26 21:12:37.981421] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:15:50.598 [2024-11-26 21:12:37.984392] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Submitting Keep Alive failed 00:15:50.598 NVMe io qpair process completion error 00:15:50.598 NVMe io qpair process completion error 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, sc=8) 00:15:50.598 Write completed with error (sct=0, 
sc=8)
00:15:50.598 Write completed with error (sct=0, sc=8)
00:15:50.598 [... the identical 'Write completed with error (sct=0, sc=8)' entry repeats for every write still in flight on the failed qpairs; the verbatim duplicates are elided ...]
00:15:50.599 NVMe io qpair process completion error
00:15:50.599 NVMe io qpair process completion error
00:15:50.599 NVMe io qpair process completion error
00:15:50.599 Write completed with error (sct=0, sc=8)
00:15:50.599 [... further verbatim 'Write completed with error (sct=0, sc=8)' duplicates elided ...]
00:15:50.599 NVMe io qpair process completion error
00:15:50.599 Write completed with error (sct=0, sc=8)
00:15:50.860 [... verbatim 'Write completed with error (sct=0, sc=8)' duplicates continue past the end of this capture ...]
completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:50.860 Write completed with error (sct=0, sc=8) 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3903009 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3903009 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:51.120 21:12:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3903009 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with 
error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write 
completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 [2024-11-26 21:12:38.987451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:51.692 [2024-11-26 21:12:38.987511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 [2024-11-26 21:12:38.989786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:51.692 [2024-11-26 21:12:38.989822] 
nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.692 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 [2024-11-26 21:12:38.996564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 Write completed with error (sct=0, sc=8) 00:15:51.693 [2024-11-26 21:12:38.996627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
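The xtrace above is the harness's negative assertion: NOT wait 3903009 passes only if the spdk_nvme_perf process (pid 3903009) exits non-zero, which it must, since its writes are being aborted by the shutdown. A condensed bash sketch of that pattern, reconstructed from the trace and simplified (the real helper in autotest_common.sh also validates the argument via valid_exec_arg and special-cases exit codes above 128 from signals); not a verbatim copy of the SPDK source:

    # NOT runs the wrapped command and succeeds only if the command fails
    NOT() {
        local es=0
        "$@" || es=$?
        # es stays 0 only if the wrapped command succeeded, which would
        # make this assertion (and the surrounding test) fail
        (( es != 0 ))
    }

    # usage as seen in shutdown_tc4: NOT wait <perf_pid>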
00:15:51.693 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.693 [2024-11-26 21:12:39.008120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.693 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.693 [2024-11-26 21:12:39.008186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:15:51.693 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.694 [2024-11-26 21:12:39.010358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.694 [2024-11-26 21:12:39.010395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:15:51.694 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.694 [2024-11-26 21:12:39.019015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.694 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.694 [2024-11-26 21:12:39.019079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:15:51.694 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.695 [2024-11-26 21:12:39.029924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.695 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.695 [2024-11-26 21:12:39.029988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:15:51.695 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.696 [2024-11-26 21:12:39.040310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.696 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.696 [2024-11-26 21:12:39.040374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:15:51.696 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.696 [2024-11-26 21:12:39.042470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.696 [2024-11-26 21:12:39.042508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:15:51.696 Write completed with error (sct=0, sc=8) [repeated]
00:15:51.696 [2024-11-26 21:12:39.080695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:15:51.696 [2024-11-26 21:12:39.080750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:15:51.696 Initializing NVMe Controllers
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:15:51.696 Controller IO queue size 128, less than required.
00:15:51.696 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. [this advisory pair repeats after each of the ten controller attach lines]
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:15:51.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:15:51.696 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:15:51.696 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:15:51.696 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:15:51.696 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:15:51.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:15:51.697 Initialization complete. Launching workers.
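Each "Controller IO queue size 128, less than required" advisory above means the perf workload keeps more I/O outstanding than the target's 128-entry I/O queue accepts, so the excess waits in the host driver. In a run where that queuing were unwanted, the usual fix is to cap the submission depth. A hypothetical invocation for illustration only (-q queue depth, -o I/O size in bytes, -w workload, -t seconds, and -r transport ID are standard spdk_nvme_perf options; the values here are not taken from this job):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3'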
00:15:51.697 ========================================================
00:15:51.697 Latency(us)
00:15:51.697 Device Information : IOPS MiB/s Average min max
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1664.06 71.50 77585.00 107.22 1205692.08
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1654.39 71.09 78117.02 107.36 1214312.96
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1674.75 71.96 89367.62 108.14 2143689.78
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1664.91 71.54 90005.35 107.67 2151858.57
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1635.39 70.27 78264.14 107.39 1199401.01
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1686.79 72.48 88896.85 103.50 2128218.25
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1654.56 71.09 90744.23 106.68 2181874.39
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1694.94 72.83 88660.45 104.05 2099919.81
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1635.39 70.27 78394.90 107.69 1215292.56
00:15:51.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1723.44 74.05 87220.36 80.82 2027339.13
00:15:51.697 ========================================================
00:15:51.697 Total : 16688.60 717.09 84773.06 80.82 2181874.39
00:15:51.697 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
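As a rough sanity check on the latency table above, Little's law ties the columns together: outstanding I/O ≈ IOPS × average latency. Per controller that is about (16688.60 / 10) × 84773.06 µs ≈ 141 writes in flight, slightly above the 128-entry I/O queue, which is exactly the driver-side queuing the earlier advisories warned about (approximate, since per-controller rates and latencies vary).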
00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.697 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:51.697 rmmod nvme_rdma 00:15:51.956 rmmod nvme_fabrics 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3902704 ']' 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3902704 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3902704 ']' 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3902704 00:15:51.956 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3902704) - No such process 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3902704 is not found' 00:15:51.956 Process with pid 3902704 is not found 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:15:51.956 00:15:51.956 real 0m8.510s 00:15:51.956 user 0m31.995s 00:15:51.956 sys 0m1.039s 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:15:51.956 ************************************ 00:15:51.956 END TEST nvmf_shutdown_tc4 00:15:51.956 ************************************ 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:15:51.956 00:15:51.956 real 0m30.661s 00:15:51.956 user 1m33.723s 00:15:51.956 sys 0m8.653s 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:15:51.956 ************************************ 00:15:51.956 END TEST nvmf_shutdown 00:15:51.956 ************************************ 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.956 
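For context on the shutdown_tc4 failure above: the latency table is standard spdk_nvme_perf output, and the repeated "Controller IO queue size 128, less than required" warnings mean the tool requested a deeper queue than each controller advertises, so the excess submissions queue at the NVMe driver, exactly as the warning text says. A minimal sketch of an equivalent invocation against one of these RDMA subsystems follows; the queue depth, IO size, pattern, and runtime are illustrative assumptions, since the exact flags used by this run are not shown in the log:

# Sketch only -- every flag value below is an assumption, not the value shutdown_tc4 used.
# -q is deliberately above 128 to reproduce the "queue size less than required" warning.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 256 -o 4096 -w randwrite -t 10 \
    -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'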
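The nvmf_nsid test that starts below ultimately compares each namespace's NGUID against the UUID assigned to it; as the uuid2nguid trace later in the run shows, the mapping is just the UUID with dashes removed and the hex uppercased (the trace shows tr -d - doing the dash removal; the uppercasing step is not visible). A minimal shell sketch of that transformation, using a UUID generated later in this run:

# UUID -> NGUID: strip dashes and uppercase, mirroring the uuid2nguid helper in nsid.sh
printf '%s\n' '7ee3bdd3-fa9a-4a89-82b2-50f155c2054d' | tr -d '-' | tr '[:lower:]' '[:upper:]'
# expected output: 7EE3BDD3FA9A4A8982B250F155C2054D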
************************************ 00:15:51.956 START TEST nvmf_nsid 00:15:51.956 ************************************ 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:15:51.956 * Looking for test storage... 00:15:51.956 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:51.956 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:52.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.216 --rc genhtml_branch_coverage=1 00:15:52.216 --rc genhtml_function_coverage=1 00:15:52.216 --rc genhtml_legend=1 00:15:52.216 --rc geninfo_all_blocks=1 00:15:52.216 --rc geninfo_unexecuted_blocks=1 00:15:52.216 00:15:52.216 ' 00:15:52.216 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:52.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.216 --rc genhtml_branch_coverage=1 00:15:52.216 --rc genhtml_function_coverage=1 00:15:52.217 --rc genhtml_legend=1 00:15:52.217 --rc geninfo_all_blocks=1 00:15:52.217 --rc geninfo_unexecuted_blocks=1 00:15:52.217 00:15:52.217 ' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:52.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.217 --rc genhtml_branch_coverage=1 00:15:52.217 --rc genhtml_function_coverage=1 00:15:52.217 --rc genhtml_legend=1 00:15:52.217 --rc geninfo_all_blocks=1 00:15:52.217 --rc geninfo_unexecuted_blocks=1 00:15:52.217 00:15:52.217 ' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:52.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.217 --rc genhtml_branch_coverage=1 00:15:52.217 --rc genhtml_function_coverage=1 00:15:52.217 --rc genhtml_legend=1 00:15:52.217 --rc geninfo_all_blocks=1 00:15:52.217 --rc geninfo_unexecuted_blocks=1 00:15:52.217 00:15:52.217 ' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.217 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:52.217 21:12:39 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.487 21:12:44 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.487 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:15:57.488 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:15:57.488 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:15:57.488 Found net devices under 0000:18:00.0: mlx_0_0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:15:57.488 Found net devices under 0000:18:00.1: mlx_0_1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:15:57.488 21:12:44 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:15:57.488 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:57.488 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:15:57.488 altname enp24s0f0np0 00:15:57.488 altname ens785f0np0 00:15:57.488 inet 192.168.100.8/24 scope global mlx_0_0 00:15:57.488 valid_lft forever preferred_lft forever 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:15:57.488 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:57.488 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:15:57.488 altname enp24s0f1np1 00:15:57.488 altname ens785f1np1 00:15:57.488 inet 192.168.100.9/24 scope global mlx_0_1 00:15:57.488 valid_lft forever preferred_lft forever 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:15:57.488 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:57.489 
21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:15:57.489 192.168.100.9' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:15:57.489 192.168.100.9' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:15:57.489 192.168.100.9' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:57.489 21:12:44 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:15:57.489 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3907497 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3907497 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3907497 ']' 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.748 21:12:44 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:57.748 [2024-11-26 21:12:44.973717] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:15:57.748 [2024-11-26 21:12:44.973761] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.748 [2024-11-26 21:12:45.028031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.748 [2024-11-26 21:12:45.066130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.748 [2024-11-26 21:12:45.066163] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.748 [2024-11-26 21:12:45.066169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.748 [2024-11-26 21:12:45.066175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.748 [2024-11-26 21:12:45.066179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.748 [2024-11-26 21:12:45.066673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.748 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.748 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:57.748 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:57.748 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.748 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3907613 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:15:58.006 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7ee3bdd3-fa9a-4a89-82b2-50f155c2054d 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=d9007ba9-7a13-4946-b1a8-1f8704c80945 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d5aa5953-14b0-4e90-a805-b8b108338857 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.007 21:12:45 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:58.007 null0 00:15:58.007 null1 00:15:58.007 [2024-11-26 21:12:45.239156] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:15:58.007 [2024-11-26 21:12:45.239195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907613 ] 00:15:58.007 null2 00:15:58.007 [2024-11-26 21:12:45.261388] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x910f50/0x9215d0) succeed. 00:15:58.007 [2024-11-26 21:12:45.269394] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x912400/0x9a1640) succeed. 00:15:58.007 [2024-11-26 21:12:45.296165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.007 [2024-11-26 21:12:45.317327] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:58.007 [2024-11-26 21:12:45.334756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3907613 /var/tmp/tgt2.sock 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3907613 ']' 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:15:58.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.007 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:15:58.266 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.266 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:15:58.266 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:15:58.524 [2024-11-26 21:12:45.855164] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdc7ca0/0xbf0dd0) succeed. 00:15:58.524 [2024-11-26 21:12:45.863993] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc5010/0xc32470) succeed. 
00:15:58.524 [2024-11-26 21:12:45.904985] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:15:58.524 nvme0n1 nvme0n2 00:15:58.524 nvme1n1 00:15:58.524 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:15:58.783 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:15:58.783 21:12:45 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:06.895 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7ee3bdd3-fa9a-4a89-82b2-50f155c2054d 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7ee3bdd3fa9a4a8982b250f155c2054d 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7EE3BDD3FA9A4A8982B250F155C2054D 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7EE3BDD3FA9A4A8982B250F155C2054D == \7\E\E\3\B\D\D\3\F\A\9\A\4\A\8\9\8\2\B\2\5\0\F\1\5\5\C\2\0\5\4\D ]] 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:06.896 21:12:52 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid d9007ba9-7a13-4946-b1a8-1f8704c80945 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d9007ba97a134946b1a81f8704c80945 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D9007BA97A134946B1A81F8704C80945 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ D9007BA97A134946B1A81F8704C80945 == \D\9\0\0\7\B\A\9\7\A\1\3\4\9\4\6\B\1\A\8\1\F\8\7\0\4\C\8\0\9\4\5 ]] 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d5aa5953-14b0-4e90-a805-b8b108338857 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:16:06.896 21:12:52 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d5aa595314b04e90a805b8b108338857 00:16:06.896 21:12:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D5AA595314B04E90A805B8B108338857 00:16:06.896 21:12:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D5AA595314B04E90A805B8B108338857 == 
\D\5\A\A\5\9\5\3\1\4\B\0\4\E\9\0\A\8\0\5\B\8\B\1\0\8\3\3\8\8\5\7 ]] 00:16:06.896 21:12:53 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:16:13.446 21:12:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:13.446 21:12:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:16:13.446 21:12:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3907613 00:16:13.446 21:12:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3907613 ']' 00:16:13.446 21:12:59 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3907613 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3907613 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3907613' 00:16:13.446 killing process with pid 3907613 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3907613 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3907613 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:13.446 rmmod nvme_rdma 00:16:13.446 rmmod nvme_fabrics 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3907497 ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3907497 ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3907497' 00:16:13.446 killing process with pid 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3907497 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:13.446 00:16:13.446 real 0m21.426s 00:16:13.446 user 0m32.290s 00:16:13.446 sys 0m5.220s 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:16:13.446 ************************************ 00:16:13.446 END TEST nvmf_nsid 00:16:13.446 ************************************ 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:16:13.446 00:16:13.446 real 7m7.751s 00:16:13.446 user 17m21.046s 00:16:13.446 sys 1m50.391s 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.446 21:13:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.446 ************************************ 00:16:13.446 END TEST nvmf_target_extra 00:16:13.446 ************************************ 00:16:13.446 21:13:00 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:16:13.446 21:13:00 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.446 21:13:00 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.446 21:13:00 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:16:13.446 ************************************ 00:16:13.446 START TEST nvmf_host 00:16:13.446 ************************************ 00:16:13.446 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:16:13.705 * Looking for test storage... 
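The nsid checks that just completed hinge on one identity: a namespace's NGUID is its UUID with the dashes removed and the hex upper-cased, so the value assigned at namespace creation must round-trip through nvme id-ns on the host. A standalone sketch of that verification, assuming nvme-cli and jq are available (the helpers are re-creations of what target/nsid.sh traces above, not the originals):

    # Strip dashes and upper-case a UUID to get its NGUID form.
    uuid2nguid() {
        tr -d - <<< "${1^^}"
    }

    # Ask the controller for the NGUID of namespace $2 on controller $1, upper-cased.
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2 nguid
        nguid=$(nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid)
        echo "${nguid^^}"
    }

    [[ $(nvme_get_nguid nvme0 2) == "$(uuid2nguid d9007ba9-7a13-4946-b1a8-1f8704c80945)" ]] \
        && echo "nsid 2 NGUID matches its UUID"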
00:16:13.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.705 --rc genhtml_branch_coverage=1 00:16:13.705 --rc genhtml_function_coverage=1 00:16:13.705 --rc genhtml_legend=1 00:16:13.705 --rc geninfo_all_blocks=1 00:16:13.705 --rc geninfo_unexecuted_blocks=1 00:16:13.705 00:16:13.705 ' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:16:13.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.705 --rc genhtml_branch_coverage=1 00:16:13.705 --rc genhtml_function_coverage=1 00:16:13.705 --rc genhtml_legend=1 00:16:13.705 --rc geninfo_all_blocks=1 00:16:13.705 --rc geninfo_unexecuted_blocks=1 00:16:13.705 00:16:13.705 ' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.705 --rc genhtml_branch_coverage=1 00:16:13.705 --rc genhtml_function_coverage=1 00:16:13.705 --rc genhtml_legend=1 00:16:13.705 --rc geninfo_all_blocks=1 00:16:13.705 --rc geninfo_unexecuted_blocks=1 00:16:13.705 00:16:13.705 ' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.705 --rc genhtml_branch_coverage=1 00:16:13.705 --rc genhtml_function_coverage=1 00:16:13.705 --rc genhtml_legend=1 00:16:13.705 --rc geninfo_all_blocks=1 00:16:13.705 --rc geninfo_unexecuted_blocks=1 00:16:13.705 00:16:13.705 ' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.705 21:13:00 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.705 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.705 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.706 ************************************ 00:16:13.706 START TEST nvmf_multicontroller 00:16:13.706 ************************************ 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:16:13.706 * Looking for test storage... 00:16:13.706 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.706 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.964 --rc genhtml_branch_coverage=1 00:16:13.964 --rc genhtml_function_coverage=1 00:16:13.964 --rc genhtml_legend=1 00:16:13.964 --rc geninfo_all_blocks=1 00:16:13.964 --rc geninfo_unexecuted_blocks=1 00:16:13.964 00:16:13.964 ' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.964 --rc genhtml_branch_coverage=1 00:16:13.964 --rc genhtml_function_coverage=1 00:16:13.964 --rc genhtml_legend=1 00:16:13.964 --rc geninfo_all_blocks=1 00:16:13.964 --rc geninfo_unexecuted_blocks=1 00:16:13.964 00:16:13.964 ' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.964 --rc genhtml_branch_coverage=1 00:16:13.964 --rc genhtml_function_coverage=1 00:16:13.964 --rc genhtml_legend=1 00:16:13.964 --rc geninfo_all_blocks=1 00:16:13.964 --rc geninfo_unexecuted_blocks=1 00:16:13.964 00:16:13.964 ' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:13.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.964 --rc genhtml_branch_coverage=1 00:16:13.964 --rc genhtml_function_coverage=1 00:16:13.964 --rc genhtml_legend=1 00:16:13.964 --rc geninfo_all_blocks=1 00:16:13.964 --rc geninfo_unexecuted_blocks=1 00:16:13.964 00:16:13.964 ' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
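The "lt 1.15 2" trace above is scripts/common.sh comparing the installed lcov version against 2 in pure bash: both strings are split on dots and dashes into arrays, then walked field by field as integers, with missing fields counting as zero. A compact sketch of the same idea, assuming purely numeric fields (the real script routes each field through a decimal() guard):

    # Succeed when version $1 sorts strictly below version $2 (e.g. 1.15 < 2).
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly lower
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov options selected"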
00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.964 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.965 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:13.965 21:13:01 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:16:13.965 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:16:13.965 00:16:13.965 real 0m0.201s 00:16:13.965 user 0m0.114s 00:16:13.965 sys 0m0.100s 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 ************************************ 00:16:13.965 END TEST nvmf_multicontroller 00:16:13.965 ************************************ 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:13.965 ************************************ 00:16:13.965 START TEST nvmf_aer 00:16:13.965 ************************************ 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:16:13.965 * Looking for test storage... 
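nvmf_multicontroller passes in well under a second here because it never really runs: host/multicontroller.sh@18-20 tests the transport and bails out cleanly on RDMA, since the rdma stack cannot configure the same IP for host and target. The gating pattern is roughly the following (the variable name is illustrative, not taken from the script):

    # Skip early, and exit 0 so the harness records success rather than failure.
    if [[ $TEST_TRANSPORT == rdma ]]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi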
00:16:13.965 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:16:13.965 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.223 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.224 --rc genhtml_branch_coverage=1 00:16:14.224 --rc genhtml_function_coverage=1 00:16:14.224 --rc genhtml_legend=1 00:16:14.224 --rc geninfo_all_blocks=1 00:16:14.224 --rc geninfo_unexecuted_blocks=1 00:16:14.224 00:16:14.224 ' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.224 --rc genhtml_branch_coverage=1 00:16:14.224 --rc genhtml_function_coverage=1 00:16:14.224 --rc genhtml_legend=1 00:16:14.224 --rc geninfo_all_blocks=1 00:16:14.224 --rc geninfo_unexecuted_blocks=1 00:16:14.224 00:16:14.224 ' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.224 --rc genhtml_branch_coverage=1 00:16:14.224 --rc genhtml_function_coverage=1 00:16:14.224 --rc genhtml_legend=1 00:16:14.224 --rc geninfo_all_blocks=1 00:16:14.224 --rc geninfo_unexecuted_blocks=1 00:16:14.224 00:16:14.224 ' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:14.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.224 --rc genhtml_branch_coverage=1 00:16:14.224 --rc genhtml_function_coverage=1 00:16:14.224 --rc genhtml_legend=1 00:16:14.224 --rc geninfo_all_blocks=1 00:16:14.224 --rc geninfo_unexecuted_blocks=1 00:16:14.224 00:16:14.224 ' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.224 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:16:14.224 21:13:01 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.776 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.776 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.776 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:20.777 21:13:06 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:20.777 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:20.777 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:20.777 Found net devices under 0000:18:00.0: mlx_0_0 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.777 
21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:20.777 Found net devices under 0000:18:00.1: mlx_0_1 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:20.777 21:13:06 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:20.777 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:20.777 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:20.777 altname enp24s0f0np0 00:16:20.777 altname ens785f0np0 00:16:20.777 inet 192.168.100.8/24 scope global mlx_0_0 00:16:20.777 valid_lft forever preferred_lft forever 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:20.777 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:20.777 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:20.777 altname enp24s0f1np1 00:16:20.777 altname ens785f1np1 00:16:20.777 inet 192.168.100.9/24 scope global mlx_0_1 00:16:20.777 valid_lft forever preferred_lft forever 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:20.777 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:20.777 192.168.100.9' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:20.778 192.168.100.9' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:20.778 192.168.100.9' 
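allocate_nic_ips resolves each mlx interface to its IPv4 address with the one-liner traced at nvmf/common.sh@117: ip -o prints the whole record on one line, awk takes the fourth field (the CIDR address), and cut drops the prefix length. As a standalone helper:

    # Print the first IPv4 address on an interface, without the /prefix suffix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9 on this rig

The two results are then joined into RDMA_IP_LIST and split back apart with head -n 1 and tail -n +2 | head -n 1 to become the first and second target IPs.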
00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3913854 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3913854 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3913854 ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 [2024-11-26 21:13:07.241950] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:16:20.778 [2024-11-26 21:13:07.241999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.778 [2024-11-26 21:13:07.301500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.778 [2024-11-26 21:13:07.344072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.778 [2024-11-26 21:13:07.344107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:20.778 [2024-11-26 21:13:07.344114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.778 [2024-11-26 21:13:07.344120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:20.778 [2024-11-26 21:13:07.344124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.778 [2024-11-26 21:13:07.345582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.778 [2024-11-26 21:13:07.345677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.778 [2024-11-26 21:13:07.345768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.778 [2024-11-26 21:13:07.345769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 [2024-11-26 21:13:07.501871] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf73530/0xf77a20) succeed. 00:16:20.778 [2024-11-26 21:13:07.510090] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf74bc0/0xfb90c0) succeed. 
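The transport bring-up traced above can be reproduced by hand against a running nvmf_tgt. A minimal sketch, assuming the stock SPDK tree layout for scripts/rpc.py (the RPC name and every option are taken verbatim from the rpc_cmd trace; rpc_cmd is just a thin wrapper over rpc.py in these tests):

  # create the RDMA transport with the same options the aer test passes
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The two create_ib_device NOTICEs above are the target claiming the mlx5 ports (mlx5_0, mlx5_1) for that transport.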
00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 Malloc0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 [2024-11-26 21:13:07.680629] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 [ 00:16:20.778 { 00:16:20.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.778 "subtype": "Discovery", 00:16:20.778 "listen_addresses": [], 00:16:20.778 "allow_any_host": true, 00:16:20.778 "hosts": [] 00:16:20.778 }, 00:16:20.778 { 00:16:20.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.778 "subtype": "NVMe", 00:16:20.778 "listen_addresses": [ 00:16:20.778 { 00:16:20.778 "trtype": "RDMA", 00:16:20.778 "adrfam": "IPv4", 00:16:20.778 "traddr": "192.168.100.8", 00:16:20.778 "trsvcid": "4420" 00:16:20.778 } 00:16:20.778 ], 00:16:20.778 "allow_any_host": true, 00:16:20.778 "hosts": [], 00:16:20.778 "serial_number": "SPDK00000000000001", 00:16:20.778 "model_number": "SPDK bdev Controller", 00:16:20.778 "max_namespaces": 2, 00:16:20.778 "min_cntlid": 1, 00:16:20.778 "max_cntlid": 65519, 00:16:20.778 "namespaces": [ 00:16:20.778 { 00:16:20.778 "nsid": 1, 00:16:20.778 "bdev_name": "Malloc0", 00:16:20.778 "name": "Malloc0", 00:16:20.778 "nguid": "E7CE0841FACE4B7B99036A8B7113056B", 00:16:20.778 "uuid": "e7ce0841-face-4b7b-9903-6a8b7113056b" 00:16:20.778 } 00:16:20.778 ] 00:16:20.778 } 00:16:20.778 ] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3913958 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 Malloc1 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 [ 00:16:20.778 { 00:16:20.778 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.778 "subtype": "Discovery", 00:16:20.778 "listen_addresses": [], 00:16:20.778 "allow_any_host": true, 00:16:20.778 "hosts": [] 00:16:20.778 }, 00:16:20.778 { 00:16:20.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:20.778 "subtype": "NVMe", 00:16:20.778 "listen_addresses": [ 00:16:20.778 { 00:16:20.778 "trtype": "RDMA", 00:16:20.778 "adrfam": "IPv4", 00:16:20.778 "traddr": "192.168.100.8", 00:16:20.778 "trsvcid": "4420" 00:16:20.778 } 00:16:20.778 ], 00:16:20.778 "allow_any_host": true, 00:16:20.778 "hosts": [], 00:16:20.778 "serial_number": "SPDK00000000000001", 00:16:20.778 "model_number": "SPDK bdev Controller", 00:16:20.778 "max_namespaces": 2, 00:16:20.778 "min_cntlid": 1, 00:16:20.778 "max_cntlid": 65519, 00:16:20.778 "namespaces": [ 00:16:20.778 { 00:16:20.778 "nsid": 1, 00:16:20.778 "bdev_name": "Malloc0", 00:16:20.778 "name": "Malloc0", 00:16:20.778 "nguid": "E7CE0841FACE4B7B99036A8B7113056B", 00:16:20.778 "uuid": "e7ce0841-face-4b7b-9903-6a8b7113056b" 00:16:20.778 }, 00:16:20.778 { 00:16:20.778 "nsid": 2, 00:16:20.778 "bdev_name": "Malloc1", 00:16:20.778 "name": "Malloc1", 00:16:20.778 "nguid": "11739AD7EB6B4F9D963942DF8F5FB344", 00:16:20.778 "uuid": "11739ad7-eb6b-4f9d-9639-42df8f5fb344" 00:16:20.778 } 00:16:20.778 ] 00:16:20.778 } 00:16:20.778 ] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:07 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3913958 00:16:20.778 Asynchronous Event Request test 00:16:20.778 Attaching to 192.168.100.8 00:16:20.778 Attached to 192.168.100.8 00:16:20.778 Registering asynchronous event callbacks... 00:16:20.778 Starting namespace attribute notice tests for all controllers... 00:16:20.778 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:20.778 aer_cb - Changed Namespace 00:16:20.778 Cleaning up... 
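That short block is the point of the whole test: the aer tool connects to cnode1, arms Asynchronous Event Requests, and the live hot add of Malloc1 as nsid 2 is what fires the Namespace Attribute Changed notice (log page 4, the Changed Namespace List; aen_event_type 0x02 is the Notice type) seen in aer_cb. The same AEN can be provoked by hand against a target configured as above, using the rpc_cmd calls already traced:

  # with a host connected and an AER outstanding, hot-adding a second
  # namespace makes the controller complete the AER with a namespace-
  # attribute-changed notice
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2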
00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:16:20.778 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:20.779 rmmod nvme_rdma 00:16:20.779 rmmod nvme_fabrics 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3913854 ']' 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3913854 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3913854 ']' 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3913854 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3913854 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3913854' 00:16:20.779 killing process 
with pid 3913854 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3913854 00:16:20.779 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3913854 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:21.039 00:16:21.039 real 0m7.108s 00:16:21.039 user 0m5.801s 00:16:21.039 sys 0m4.733s 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:16:21.039 ************************************ 00:16:21.039 END TEST nvmf_aer 00:16:21.039 ************************************ 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:21.039 21:13:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:21.297 ************************************ 00:16:21.297 START TEST nvmf_async_init 00:16:21.297 ************************************ 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:16:21.297 * Looking for test storage... 00:16:21.297 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:16:21.297 
21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:21.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.297 --rc genhtml_branch_coverage=1 00:16:21.297 --rc genhtml_function_coverage=1 00:16:21.297 --rc genhtml_legend=1 00:16:21.297 --rc geninfo_all_blocks=1 00:16:21.297 --rc geninfo_unexecuted_blocks=1 00:16:21.297 00:16:21.297 ' 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:21.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.297 --rc genhtml_branch_coverage=1 00:16:21.297 --rc genhtml_function_coverage=1 00:16:21.297 --rc genhtml_legend=1 00:16:21.297 --rc geninfo_all_blocks=1 00:16:21.297 --rc geninfo_unexecuted_blocks=1 00:16:21.297 00:16:21.297 ' 00:16:21.297 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:21.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.297 --rc genhtml_branch_coverage=1 00:16:21.297 --rc genhtml_function_coverage=1 00:16:21.297 --rc genhtml_legend=1 00:16:21.297 --rc geninfo_all_blocks=1 00:16:21.297 --rc geninfo_unexecuted_blocks=1 00:16:21.298 00:16:21.298 ' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:21.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.298 --rc genhtml_branch_coverage=1 00:16:21.298 --rc genhtml_function_coverage=1 00:16:21.298 --rc genhtml_legend=1 00:16:21.298 --rc geninfo_all_blocks=1 00:16:21.298 --rc geninfo_unexecuted_blocks=1 00:16:21.298 00:16:21.298 ' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
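The cmp_versions walk above is only deciding whether the installed lcov predates 2.x, so the older lcov_branch_coverage/lcov_function_coverage spellings of the --rc options are used. A rough stand-in for that comparison, offered purely as a sketch: it leans on sort -V from GNU coreutils instead of the field-by-field loop the script actually traces.

  # succeeds when $1 is a strictly lower version than $2
  version_lt() {
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'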
00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:21.298 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4cdecb11db55497b82fb81be505d8563 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:16:21.298 21:13:08 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:16:26.565 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:26.566 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:26.566 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:26.566 Found net devices under 0000:18:00.0: mlx_0_0 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:26.566 Found net devices under 0000:18:00.1: mlx_0_1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:26.566 21:13:13 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:26.566 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:26.566 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:26.566 altname enp24s0f0np0 00:16:26.566 altname ens785f0np0 00:16:26.566 inet 192.168.100.8/24 scope global mlx_0_0 00:16:26.566 valid_lft forever preferred_lft forever 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:26.566 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:26.825 21:13:13 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:26.825 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:26.825 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:26.825 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:26.825 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:26.825 altname enp24s0f1np1 00:16:26.825 altname ens785f1np1 00:16:26.826 inet 192.168.100.9/24 scope global mlx_0_1 00:16:26.826 valid_lft forever preferred_lft forever 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
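Both tests lean on the same helper: get_ip_address is a three-stage pipeline over ip(8) output, and the 192.168.100.8/9 addresses come straight from the interface dumps above. A self-contained sketch of that extraction, together with the head/tail split the trace performs next to pick the first and second target IPs:

  # field 4 of `ip -o -4 addr show IFACE` is ADDR/PREFIX; drop the prefix
  get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST="$(get_ip_address mlx_0_0; get_ip_address mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9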
00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:26.826 192.168.100.9' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:26.826 192.168.100.9' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:26.826 192.168.100.9' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3917219 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3917219 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3917219 ']' 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:26.826 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:26.826 [2024-11-26 21:13:14.139687] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:16:26.826 [2024-11-26 21:13:14.139729] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.826 [2024-11-26 21:13:14.197786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.826 [2024-11-26 21:13:14.235813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.826 [2024-11-26 21:13:14.235846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.826 [2024-11-26 21:13:14.235853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.826 [2024-11-26 21:13:14.235858] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.826 [2024-11-26 21:13:14.235863] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
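nvmfappstart is doing two things here: launching the target pinned to a single core (-m 0x1, versus the 0xF four-core mask the aer run used) and then, via waitforlisten, polling until the UNIX-domain RPC socket answers. A hand-run equivalent, assuming the build/bin path shown in the trace and using rpc_get_methods only as a liveness probe:

  # start the target on one core and block until /var/tmp/spdk.sock is up
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done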
00:16:26.826 [2024-11-26 21:13:14.236329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 [2024-11-26 21:13:14.384926] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7552e0/0x7597d0) succeed. 00:16:27.085 [2024-11-26 21:13:14.392850] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x756790/0x79ae70) succeed. 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 null0 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4cdecb11db55497b82fb81be505d8563 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.085 [2024-11-26 21:13:14.457158] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.085 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.344 nvme0n1 00:16:27.344 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.344 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:27.344 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.344 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.344 [ 00:16:27.344 { 00:16:27.344 "name": "nvme0n1", 00:16:27.344 "aliases": [ 00:16:27.344 "4cdecb11-db55-497b-82fb-81be505d8563" 00:16:27.344 ], 00:16:27.344 "product_name": "NVMe disk", 00:16:27.344 "block_size": 512, 00:16:27.344 "num_blocks": 2097152, 00:16:27.344 "uuid": "4cdecb11-db55-497b-82fb-81be505d8563", 00:16:27.344 "numa_id": 0, 00:16:27.344 "assigned_rate_limits": { 00:16:27.344 "rw_ios_per_sec": 0, 00:16:27.344 "rw_mbytes_per_sec": 0, 00:16:27.344 "r_mbytes_per_sec": 0, 00:16:27.344 "w_mbytes_per_sec": 0 00:16:27.344 }, 00:16:27.344 "claimed": false, 00:16:27.344 "zoned": false, 00:16:27.344 "supported_io_types": { 00:16:27.344 "read": true, 00:16:27.344 "write": true, 00:16:27.345 "unmap": false, 00:16:27.345 "flush": true, 00:16:27.345 "reset": true, 00:16:27.345 "nvme_admin": true, 00:16:27.345 "nvme_io": true, 00:16:27.345 "nvme_io_md": false, 00:16:27.345 "write_zeroes": true, 00:16:27.345 "zcopy": false, 00:16:27.345 "get_zone_info": false, 00:16:27.345 "zone_management": false, 00:16:27.345 "zone_append": false, 00:16:27.345 "compare": true, 00:16:27.345 "compare_and_write": true, 00:16:27.345 "abort": true, 00:16:27.345 "seek_hole": false, 00:16:27.345 "seek_data": false, 00:16:27.345 "copy": true, 00:16:27.345 "nvme_iov_md": false 00:16:27.345 }, 00:16:27.345 "memory_domains": [ 00:16:27.345 { 00:16:27.345 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:27.345 "dma_device_type": 0 00:16:27.345 } 00:16:27.345 ], 00:16:27.345 "driver_specific": { 00:16:27.345 "nvme": [ 00:16:27.345 { 00:16:27.345 "trid": { 00:16:27.345 "trtype": "RDMA", 00:16:27.345 "adrfam": "IPv4", 00:16:27.345 "traddr": "192.168.100.8", 00:16:27.345 "trsvcid": "4420", 00:16:27.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.345 }, 00:16:27.345 "ctrlr_data": { 00:16:27.345 "cntlid": 1, 00:16:27.345 "vendor_id": "0x8086", 00:16:27.345 "model_number": "SPDK bdev Controller", 00:16:27.345 "serial_number": "00000000000000000000", 00:16:27.345 "firmware_revision": "25.01", 00:16:27.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.345 "oacs": { 00:16:27.345 "security": 0, 
00:16:27.345 "format": 0, 00:16:27.345 "firmware": 0, 00:16:27.345 "ns_manage": 0 00:16:27.345 }, 00:16:27.345 "multi_ctrlr": true, 00:16:27.345 "ana_reporting": false 00:16:27.345 }, 00:16:27.345 "vs": { 00:16:27.345 "nvme_version": "1.3" 00:16:27.345 }, 00:16:27.345 "ns_data": { 00:16:27.345 "id": 1, 00:16:27.345 "can_share": true 00:16:27.345 } 00:16:27.345 } 00:16:27.345 ], 00:16:27.345 "mp_policy": "active_passive" 00:16:27.345 } 00:16:27.345 } 00:16:27.345 ] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 [2024-11-26 21:13:14.543857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:27.345 [2024-11-26 21:13:14.566428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:16:27.345 [2024-11-26 21:13:14.586557] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 [ 00:16:27.345 { 00:16:27.345 "name": "nvme0n1", 00:16:27.345 "aliases": [ 00:16:27.345 "4cdecb11-db55-497b-82fb-81be505d8563" 00:16:27.345 ], 00:16:27.345 "product_name": "NVMe disk", 00:16:27.345 "block_size": 512, 00:16:27.345 "num_blocks": 2097152, 00:16:27.345 "uuid": "4cdecb11-db55-497b-82fb-81be505d8563", 00:16:27.345 "numa_id": 0, 00:16:27.345 "assigned_rate_limits": { 00:16:27.345 "rw_ios_per_sec": 0, 00:16:27.345 "rw_mbytes_per_sec": 0, 00:16:27.345 "r_mbytes_per_sec": 0, 00:16:27.345 "w_mbytes_per_sec": 0 00:16:27.345 }, 00:16:27.345 "claimed": false, 00:16:27.345 "zoned": false, 00:16:27.345 "supported_io_types": { 00:16:27.345 "read": true, 00:16:27.345 "write": true, 00:16:27.345 "unmap": false, 00:16:27.345 "flush": true, 00:16:27.345 "reset": true, 00:16:27.345 "nvme_admin": true, 00:16:27.345 "nvme_io": true, 00:16:27.345 "nvme_io_md": false, 00:16:27.345 "write_zeroes": true, 00:16:27.345 "zcopy": false, 00:16:27.345 "get_zone_info": false, 00:16:27.345 "zone_management": false, 00:16:27.345 "zone_append": false, 00:16:27.345 "compare": true, 00:16:27.345 "compare_and_write": true, 00:16:27.345 "abort": true, 00:16:27.345 "seek_hole": false, 00:16:27.345 "seek_data": false, 00:16:27.345 "copy": true, 00:16:27.345 "nvme_iov_md": false 00:16:27.345 }, 00:16:27.345 "memory_domains": [ 00:16:27.345 { 00:16:27.345 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:27.345 "dma_device_type": 0 00:16:27.345 } 00:16:27.345 ], 00:16:27.345 "driver_specific": { 00:16:27.345 "nvme": [ 00:16:27.345 { 00:16:27.345 "trid": { 00:16:27.345 "trtype": "RDMA", 00:16:27.345 "adrfam": "IPv4", 00:16:27.345 "traddr": "192.168.100.8", 
00:16:27.345 "trsvcid": "4420", 00:16:27.345 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.345 }, 00:16:27.345 "ctrlr_data": { 00:16:27.345 "cntlid": 2, 00:16:27.345 "vendor_id": "0x8086", 00:16:27.345 "model_number": "SPDK bdev Controller", 00:16:27.345 "serial_number": "00000000000000000000", 00:16:27.345 "firmware_revision": "25.01", 00:16:27.345 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.345 "oacs": { 00:16:27.345 "security": 0, 00:16:27.345 "format": 0, 00:16:27.345 "firmware": 0, 00:16:27.345 "ns_manage": 0 00:16:27.345 }, 00:16:27.345 "multi_ctrlr": true, 00:16:27.345 "ana_reporting": false 00:16:27.345 }, 00:16:27.345 "vs": { 00:16:27.345 "nvme_version": "1.3" 00:16:27.345 }, 00:16:27.345 "ns_data": { 00:16:27.345 "id": 1, 00:16:27.345 "can_share": true 00:16:27.345 } 00:16:27.345 } 00:16:27.345 ], 00:16:27.345 "mp_policy": "active_passive" 00:16:27.345 } 00:16:27.345 } 00:16:27.345 ] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.bGuyWNQ12u 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.bGuyWNQ12u 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.bGuyWNQ12u 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.345 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.346 [2024-11-26 21:13:14.660898] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.346 [2024-11-26 21:13:14.676937] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:27.346 nvme0n1 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.346 [ 00:16:27.346 { 00:16:27.346 "name": "nvme0n1", 00:16:27.346 "aliases": [ 00:16:27.346 "4cdecb11-db55-497b-82fb-81be505d8563" 00:16:27.346 ], 00:16:27.346 "product_name": "NVMe disk", 00:16:27.346 "block_size": 512, 00:16:27.346 "num_blocks": 2097152, 00:16:27.346 "uuid": "4cdecb11-db55-497b-82fb-81be505d8563", 00:16:27.346 "numa_id": 0, 00:16:27.346 "assigned_rate_limits": { 00:16:27.346 "rw_ios_per_sec": 0, 00:16:27.346 "rw_mbytes_per_sec": 0, 00:16:27.346 "r_mbytes_per_sec": 0, 00:16:27.346 "w_mbytes_per_sec": 0 00:16:27.346 }, 00:16:27.346 "claimed": false, 00:16:27.346 "zoned": false, 00:16:27.346 "supported_io_types": { 00:16:27.346 "read": true, 00:16:27.346 "write": true, 00:16:27.346 "unmap": false, 00:16:27.346 "flush": true, 00:16:27.346 "reset": true, 00:16:27.346 "nvme_admin": true, 00:16:27.346 "nvme_io": true, 00:16:27.346 "nvme_io_md": false, 00:16:27.346 "write_zeroes": true, 00:16:27.346 "zcopy": false, 00:16:27.346 "get_zone_info": false, 00:16:27.346 "zone_management": false, 00:16:27.346 "zone_append": false, 00:16:27.346 "compare": true, 00:16:27.346 "compare_and_write": true, 00:16:27.346 "abort": true, 00:16:27.346 "seek_hole": false, 00:16:27.346 "seek_data": false, 00:16:27.346 "copy": true, 00:16:27.346 "nvme_iov_md": false 00:16:27.346 }, 00:16:27.346 "memory_domains": [ 00:16:27.346 { 00:16:27.346 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:16:27.346 "dma_device_type": 0 00:16:27.346 } 00:16:27.346 ], 00:16:27.346 "driver_specific": { 00:16:27.346 "nvme": [ 00:16:27.346 { 00:16:27.346 "trid": { 00:16:27.346 "trtype": "RDMA", 00:16:27.346 "adrfam": "IPv4", 00:16:27.346 "traddr": "192.168.100.8", 00:16:27.346 "trsvcid": "4421", 00:16:27.346 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:27.346 }, 00:16:27.346 "ctrlr_data": { 00:16:27.346 "cntlid": 3, 00:16:27.346 "vendor_id": "0x8086", 00:16:27.346 "model_number": "SPDK bdev Controller", 00:16:27.346 
"serial_number": "00000000000000000000", 00:16:27.346 "firmware_revision": "25.01", 00:16:27.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:27.346 "oacs": { 00:16:27.346 "security": 0, 00:16:27.346 "format": 0, 00:16:27.346 "firmware": 0, 00:16:27.346 "ns_manage": 0 00:16:27.346 }, 00:16:27.346 "multi_ctrlr": true, 00:16:27.346 "ana_reporting": false 00:16:27.346 }, 00:16:27.346 "vs": { 00:16:27.346 "nvme_version": "1.3" 00:16:27.346 }, 00:16:27.346 "ns_data": { 00:16:27.346 "id": 1, 00:16:27.346 "can_share": true 00:16:27.346 } 00:16:27.346 } 00:16:27.346 ], 00:16:27.346 "mp_policy": "active_passive" 00:16:27.346 } 00:16:27.346 } 00:16:27.346 ] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.346 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.bGuyWNQ12u 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:27.605 rmmod nvme_rdma 00:16:27.605 rmmod nvme_fabrics 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3917219 ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3917219 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3917219 ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3917219 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3917219 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.605 21:13:14 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3917219' 00:16:27.605 killing process with pid 3917219 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3917219 00:16:27.605 21:13:14 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3917219 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:27.863 00:16:27.863 real 0m6.585s 00:16:27.863 user 0m2.615s 00:16:27.863 sys 0m4.383s 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:16:27.863 ************************************ 00:16:27.863 END TEST nvmf_async_init 00:16:27.863 ************************************ 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:27.863 ************************************ 00:16:27.863 START TEST dma 00:16:27.863 ************************************ 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:16:27.863 * Looking for test storage... 
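The dma host test beginning here drives essentially the same small-block workload (4 KiB I/O, queue depth 16, 70% reads, 5 seconds, cores 2-3) through /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma three times, varying only the bdev and the -x DMA path; the full invocations appear further down, so this is just the shape of the runs ahead:

    test_dma ... -b Nvme0n1    -f -x translate    # NVMe-oF bdev: RDMA memory domain available
    test_dma ... -b Malloc0       -x pull_push    # ram bdev: no RDMA domain, data gets copied
    test_dma ... -b lvs0/lvol0 -f -x memzero      # lvol over the NVMe bdev; -w randread, -M ignored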
00:16:27.863 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:27.863 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:27.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.864 --rc genhtml_branch_coverage=1 00:16:27.864 --rc genhtml_function_coverage=1 00:16:27.864 --rc genhtml_legend=1 00:16:27.864 --rc geninfo_all_blocks=1 00:16:27.864 --rc geninfo_unexecuted_blocks=1 00:16:27.864 00:16:27.864 ' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:27.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.864 --rc genhtml_branch_coverage=1 00:16:27.864 --rc genhtml_function_coverage=1 00:16:27.864 --rc genhtml_legend=1 00:16:27.864 --rc geninfo_all_blocks=1 00:16:27.864 --rc geninfo_unexecuted_blocks=1 00:16:27.864 00:16:27.864 ' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:27.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.864 --rc genhtml_branch_coverage=1 00:16:27.864 --rc genhtml_function_coverage=1 00:16:27.864 --rc genhtml_legend=1 00:16:27.864 --rc geninfo_all_blocks=1 00:16:27.864 --rc geninfo_unexecuted_blocks=1 00:16:27.864 00:16:27.864 ' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:27.864 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.864 --rc genhtml_branch_coverage=1 00:16:27.864 --rc genhtml_function_coverage=1 00:16:27.864 --rc genhtml_legend=1 00:16:27.864 --rc geninfo_all_blocks=1 00:16:27.864 --rc geninfo_unexecuted_blocks=1 00:16:27.864 00:16:27.864 ' 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.864 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:28.123 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
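The "[: : integer expression expected" complaint above is benign: nvmf/common.sh line 33 executes '[' '' -eq 1 ']' while the guarded variable is empty, and bash's test builtin refuses to compare an empty string as an integer. The trace shows the script simply taking the false branch and continuing at common.sh line 37. A two-line reproduction:

    x=""
    [ "$x" -eq 1 ]    # bash: [: : integer expression expected  (exit status 2)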
00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:16:28.123 21:13:15 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:33.390 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:16:33.391 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:16:33.391 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:33.391 Found net devices under 0000:18:00.0: mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:33.391 Found net devices under 0000:18:00.1: mlx_0_1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:33.391 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.391 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:16:33.391 altname enp24s0f0np0 00:16:33.391 altname ens785f0np0 00:16:33.391 inet 192.168.100.8/24 scope global mlx_0_0 00:16:33.391 valid_lft forever preferred_lft forever 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:33.391 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:33.391 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:16:33.391 altname enp24s0f1np1 00:16:33.391 altname ens785f1np1 00:16:33.391 inet 192.168.100.9/24 scope global mlx_0_1 00:16:33.391 valid_lft forever preferred_lft forever 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:33.391 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:16:33.392 192.168.100.9' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:16:33.392 192.168.100.9' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:16:33.392 192.168.100.9' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3920660 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3920660 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3920660 ']' 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:33.392 [2024-11-26 21:13:20.524704] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:16:33.392 [2024-11-26 21:13:20.524748] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.392 [2024-11-26 21:13:20.583817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.392 [2024-11-26 21:13:20.622373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.392 [2024-11-26 21:13:20.622407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.392 [2024-11-26 21:13:20.622414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.392 [2024-11-26 21:13:20.622419] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.392 [2024-11-26 21:13:20.622425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
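For this test the target is restarted with core mask 0x3 (reactors on cores 0 and 1), leaving cores 2-3 (mask 0xc) for the test_dma application so the two sides do not contend for CPUs. The RPC setup traced next condenses to the sketch below, again phrased with scripts/rpc.py rather than the test's rpc_cmd wrapper:

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc.py bdev_malloc_create 256 512 -b Malloc0    # 256 MiB ram-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420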
00:16:33.392 [2024-11-26 21:13:20.623506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.392 [2024-11-26 21:13:20.623510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.392 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.392 [2024-11-26 21:13:20.771637] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb13e70/0xb18360) succeed. 00:16:33.392 [2024-11-26 21:13:20.779512] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb153c0/0xb59a00) succeed. 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.651 Malloc0 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:33.651 [2024-11-26 21:13:20.918800] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:33.651 { 00:16:33.651 "params": { 00:16:33.651 "name": "Nvme$subsystem", 00:16:33.651 "trtype": "$TEST_TRANSPORT", 00:16:33.651 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.651 "adrfam": "ipv4", 00:16:33.651 "trsvcid": "$NVMF_PORT", 00:16:33.651 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.651 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.651 "hdgst": ${hdgst:-false}, 00:16:33.651 "ddgst": ${ddgst:-false} 00:16:33.651 }, 00:16:33.651 "method": "bdev_nvme_attach_controller" 00:16:33.651 } 00:16:33.651 EOF 00:16:33.651 )") 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:16:33.651 21:13:20 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:33.651 "params": { 00:16:33.651 "name": "Nvme0", 00:16:33.651 "trtype": "rdma", 00:16:33.651 "traddr": "192.168.100.8", 00:16:33.651 "adrfam": "ipv4", 00:16:33.651 "trsvcid": "4420", 00:16:33.651 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:33.651 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:33.651 "hdgst": false, 00:16:33.651 "ddgst": false 00:16:33.651 }, 00:16:33.651 "method": "bdev_nvme_attach_controller" 00:16:33.651 }' 00:16:33.651 [2024-11-26 21:13:20.967083] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:16:33.651 [2024-11-26 21:13:20.967119] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920763 ] 00:16:33.651 [2024-11-26 21:13:21.020878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:33.651 [2024-11-26 21:13:21.059926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:33.651 [2024-11-26 21:13:21.059930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:40.211 bdev Nvme0n1 reports 1 memory domains
00:16:40.211 bdev Nvme0n1 supports RDMA memory domain
00:16:40.211 Initialization complete, running randrw IO for 5 sec on 2 cores
00:16:40.211 ==========================================================================
00:16:40.211 Latency [us]
00:16:40.211 IOPS MiB/s Average min max
00:16:40.211 Core 2: 22968.46 89.72 695.81 226.59 8268.98
00:16:40.211 Core 3: 23043.84 90.01 693.56 234.23 8182.68
00:16:40.211 ==========================================================================
00:16:40.211 Total : 46012.30 179.74 694.68 226.59 8268.98
00:16:40.211
00:16:40.211 Total operations: 230147, translate 230147 pull_push 0 memzero 0
00:16:40.211 21:13:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:16:40.211 21:13:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:16:40.211 21:13:26 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:16:40.211 [2024-11-26 21:13:26.464890] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
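As a sanity check on the translate table above, the Total row is the sum of the per-core IOPS together with an IOPS-weighted mean of the per-core average latencies; a quick recomputation (values copied from the table):

# Recompute the Total row from the two per-core rows above:
awk 'BEGIN {
    i2 = 22968.46; a2 = 695.81;   # Core 2: IOPS, avg latency (us)
    i3 = 23043.84; a3 = 693.56;   # Core 3: IOPS, avg latency (us)
    printf "Total IOPS:  %.2f\n", i2 + i3;                          # 46012.30
    printf "Avg latency: %.2f us\n", (i2*a2 + i3*a3) / (i2 + i3);   # 694.68
}'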
00:16:40.211 [2024-11-26 21:13:26.464934] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921778 ] 00:16:40.211 [2024-11-26 21:13:26.519137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:40.211 [2024-11-26 21:13:26.557438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.211 [2024-11-26 21:13:26.557442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:44.391 bdev Malloc0 reports 2 memory domains
00:16:44.391 bdev Malloc0 doesn't support RDMA memory domain
00:16:44.391 Initialization complete, running randrw IO for 5 sec on 2 cores
00:16:44.391 ==========================================================================
00:16:44.391 Latency [us]
00:16:44.391 IOPS MiB/s Average min max
00:16:44.391 Core 2: 15164.78 59.24 1054.42 372.61 1354.81
00:16:44.391 Core 3: 15358.52 59.99 1041.10 373.09 2174.26
00:16:44.391 ==========================================================================
00:16:44.391 Total : 30523.29 119.23 1047.72 372.61 2174.26
00:16:44.391
00:16:44.391 Total operations: 152665, translate 0 pull_push 610660 memzero 0
00:16:44.391 21:13:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:16:44.391 21:13:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:16:44.391 21:13:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:16:44.391 21:13:31 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:16:44.652 Ignoring -M option 00:16:44.652 [2024-11-26 21:13:31.865826] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
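Two details in the pull_push summary above: Malloc0 reports no RDMA memory domain, so all I/O data moved via CPU pull/push copies, and the counters show exactly four such copies per operation (the per-I/O accounting is internal to test_dma; the factor is simply read off the totals). The memzero invocation that follows also passes -M 70 alongside -w randread, hence the 'Ignoring -M option' notice, since a read-percentage mix only applies to a mixed workload.

# The 4x ratio, straight from the totals above:
echo $(( 610660 / 152665 ))   # 4 pull_push copies per operation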
00:16:44.652 [2024-11-26 21:13:31.865878] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922610 ] 00:16:44.652 [2024-11-26 21:13:31.921714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:44.652 [2024-11-26 21:13:31.957694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.652 [2024-11-26 21:13:31.957698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:50.020 bdev f0e2efaf-53e9-430d-adad-e278463668a2 reports 1 memory domains
00:16:50.020 bdev f0e2efaf-53e9-430d-adad-e278463668a2 supports RDMA memory domain
00:16:50.020 Initialization complete, running randread IO for 5 sec on 2 cores
00:16:50.020 ==========================================================================
00:16:50.020 Latency [us]
00:16:50.020 IOPS MiB/s Average min max
00:16:50.020 Core 2: 71287.27 278.47 223.53 52.71 3082.59
00:16:50.020 Core 3: 73899.41 288.67 215.62 59.03 3020.35
00:16:50.020 ==========================================================================
00:16:50.020 Total : 145186.68 567.14 219.50 52.71 3082.59
00:16:50.020
00:16:50.020 Total operations: 726006, translate 0 pull_push 0 memzero 726006
00:16:50.020 21:13:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:16:50.278 [2024-11-26 21:13:37.475993] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:16:52.807 Initializing NVMe Controllers
00:16:52.808 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0
00:16:52.808 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:16:52.808 Initialization complete. Launching workers.
00:16:52.808 ========================================================
00:16:52.808 Latency(us)
00:16:52.808 Device Information : IOPS MiB/s Average min max
00:16:52.808 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 1980.83 7.74 8077.08 7956.17 11976.08
00:16:52.808 ========================================================
00:16:52.808 Total : 1980.83 7.74 8077.08 7956.17 11976.08
00:16:52.808
00:16:52.808 21:13:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:16:52.808 21:13:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:16:52.808 21:13:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:16:52.808 21:13:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:16:52.808 [2024-11-26 21:13:39.807154] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
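Between the memzero and lvol-translate runs, dma.sh points spdk_nvme_perf at the same RDMA listener; the subsystem.c WARNING above is the target permitting the tool's discovery-service connection on a listener that was only ever added to cnode0. The invocation, copied from the trace:

# Short spdk_nvme_perf write run against the same target (from the trace):
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 16 -o 4096 -w write -t 1 \
    -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'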
00:16:52.808 [2024-11-26 21:13:39.807202] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924184 ] 00:16:52.808 [2024-11-26 21:13:39.861885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:52.808 [2024-11-26 21:13:39.900902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.808 [2024-11-26 21:13:39.900904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:58.074 bdev 4af6a220-7833-4585-8335-f6029b9192ec reports 1 memory domains
00:16:58.074 bdev 4af6a220-7833-4585-8335-f6029b9192ec supports RDMA memory domain
00:16:58.074 Initialization complete, running randrw IO for 5 sec on 2 cores
00:16:58.074 ==========================================================================
00:16:58.074 Latency [us]
00:16:58.074 IOPS MiB/s Average min max
00:16:58.074 Core 2: 20407.40 79.72 783.44 42.26 12021.06
00:16:58.074 Core 3: 20571.97 80.36 777.14 12.93 11644.87
00:16:58.074 ==========================================================================
00:16:58.074 Total : 40979.36 160.08 780.28 12.93 12021.06
00:16:58.074
00:16:58.074 Total operations: 204933, translate 204828 pull_push 0 memzero 105
00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:58.074 rmmod nvme_rdma 00:16:58.074 rmmod nvme_fabrics 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3920660 ']' 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3920660 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3920660 ']' 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3920660 00:16:58.074 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3920660 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3920660' 00:16:58.075 killing
process with pid 3920660 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3920660 00:16:58.075 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3920660 00:16:58.333 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:58.333 21:13:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:16:58.333 00:16:58.333 real 0m30.560s 00:16:58.333 user 1m34.010s 00:16:58.334 sys 0m4.840s 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:16:58.334 ************************************ 00:16:58.334 END TEST dma 00:16:58.334 ************************************ 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.334 ************************************ 00:16:58.334 START TEST nvmf_identify 00:16:58.334 ************************************ 00:16:58.334 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:16:58.592 * Looking for test storage... 00:16:58.592 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.592 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 
-- # (( v = 0 )) 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.593 --rc genhtml_branch_coverage=1 00:16:58.593 --rc genhtml_function_coverage=1 00:16:58.593 --rc genhtml_legend=1 00:16:58.593 --rc geninfo_all_blocks=1 00:16:58.593 --rc geninfo_unexecuted_blocks=1 00:16:58.593 00:16:58.593 ' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.593 --rc genhtml_branch_coverage=1 00:16:58.593 --rc genhtml_function_coverage=1 00:16:58.593 --rc genhtml_legend=1 00:16:58.593 --rc geninfo_all_blocks=1 00:16:58.593 --rc geninfo_unexecuted_blocks=1 00:16:58.593 00:16:58.593 ' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.593 --rc genhtml_branch_coverage=1 00:16:58.593 --rc genhtml_function_coverage=1 00:16:58.593 --rc genhtml_legend=1 00:16:58.593 --rc geninfo_all_blocks=1 00:16:58.593 --rc geninfo_unexecuted_blocks=1 00:16:58.593 00:16:58.593 ' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.593 --rc genhtml_branch_coverage=1 00:16:58.593 --rc genhtml_function_coverage=1 00:16:58.593 --rc genhtml_legend=1 00:16:58.593 --rc geninfo_all_blocks=1 00:16:58.593 --rc geninfo_unexecuted_blocks=1 00:16:58.593 00:16:58.593 ' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:16:58.593 21:13:45 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.593 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:16:58.593 21:13:45 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:16:58.593 21:13:45 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.873 21:13:51 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:03.873 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:03.874 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:03.874 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:03.874 Found net devices under 0000:18:00.0: mlx_0_0 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:03.874 Found net devices under 0000:18:00.1: mlx_0_1 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:03.874 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:03.874 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.874 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:03.874 altname enp24s0f0np0 00:17:03.874 altname ens785f0np0 00:17:03.875 inet 192.168.100.8/24 scope global mlx_0_0 00:17:03.875 valid_lft forever preferred_lft forever 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:03.875 21:13:51 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:03.875 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:03.875 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:03.875 altname enp24s0f1np1 00:17:03.875 altname ens785f1np1 00:17:03.875 inet 192.168.100.9/24 scope global mlx_0_1 00:17:03.875 valid_lft forever preferred_lft forever 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:17:03.875 21:13:51 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:03.875 192.168.100.9' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:03.875 192.168.100.9' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:03.875 192.168.100.9' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3928384 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3928384 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3928384 ']' 00:17:03.875 21:13:51 
nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:04.135 [2024-11-26 21:13:51.320555] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:17:04.135 [2024-11-26 21:13:51.320600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.135 [2024-11-26 21:13:51.376883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.135 [2024-11-26 21:13:51.414867] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.135 [2024-11-26 21:13:51.414904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.135 [2024-11-26 21:13:51.414911] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.135 [2024-11-26 21:13:51.414917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.135 [2024-11-26 21:13:51.414922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.135 [2024-11-26 21:13:51.416151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.135 [2024-11-26 21:13:51.416168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.136 [2024-11-26 21:13:51.416262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.136 [2024-11-26 21:13:51.416262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.136 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.136 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:04.136 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:04.136 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.136 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.136 [2024-11-26 21:13:51.540939] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b07530/0x1b0ba20) succeed. 00:17:04.136 [2024-11-26 21:13:51.549117] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b08bc0/0x1b4d0c0) succeed. 
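The identify test brings up its own target before the rpc_cmd calls that follow. Condensed from the trace above (waitforlisten and rpc_cmd are autotest_common.sh helpers; the pid matches the nvmfpid=3928384 recorded earlier in the trace):

# Launch nvmf_tgt (4 cores, all tracepoint groups) and wait for its RPC socket:
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                 # 3928384 in this run
waitforlisten $nvmfpid     # blocks until /var/tmp/spdk.sock accepts connections
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192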
00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 Malloc0 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 [2024-11-26 21:13:51.751731] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:04.396 [ 00:17:04.396 { 00:17:04.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:04.396 "subtype": "Discovery", 00:17:04.396 "listen_addresses": [ 00:17:04.396 { 00:17:04.396 "trtype": "RDMA", 
00:17:04.396 "adrfam": "IPv4", 00:17:04.396 "traddr": "192.168.100.8", 00:17:04.396 "trsvcid": "4420" 00:17:04.396 } 00:17:04.396 ], 00:17:04.396 "allow_any_host": true, 00:17:04.396 "hosts": [] 00:17:04.396 }, 00:17:04.396 { 00:17:04.396 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.396 "subtype": "NVMe", 00:17:04.396 "listen_addresses": [ 00:17:04.396 { 00:17:04.396 "trtype": "RDMA", 00:17:04.396 "adrfam": "IPv4", 00:17:04.396 "traddr": "192.168.100.8", 00:17:04.396 "trsvcid": "4420" 00:17:04.396 } 00:17:04.396 ], 00:17:04.396 "allow_any_host": true, 00:17:04.396 "hosts": [], 00:17:04.396 "serial_number": "SPDK00000000000001", 00:17:04.396 "model_number": "SPDK bdev Controller", 00:17:04.396 "max_namespaces": 32, 00:17:04.396 "min_cntlid": 1, 00:17:04.396 "max_cntlid": 65519, 00:17:04.396 "namespaces": [ 00:17:04.396 { 00:17:04.396 "nsid": 1, 00:17:04.396 "bdev_name": "Malloc0", 00:17:04.396 "name": "Malloc0", 00:17:04.396 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:04.396 "eui64": "ABCDEF0123456789", 00:17:04.396 "uuid": "83d71056-6d2c-4b23-83c8-31a3b76776c8" 00:17:04.396 } 00:17:04.396 ] 00:17:04.396 } 00:17:04.396 ] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.396 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:04.396 [2024-11-26 21:13:51.804166] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:17:04.396 [2024-11-26 21:13:51.804212] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928500 ] 00:17:04.665 [2024-11-26 21:13:51.858456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:04.665 [2024-11-26 21:13:51.858528] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:17:04.665 [2024-11-26 21:13:51.858540] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:17:04.665 [2024-11-26 21:13:51.858543] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:17:04.665 [2024-11-26 21:13:51.858570] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:04.665 [2024-11-26 21:13:51.868784] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:17:04.665 [2024-11-26 21:13:51.878467] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0
00:17:04.665 [2024-11-26 21:13:51.878478] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
00:17:04.665 [2024-11-26 21:13:51.878483] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878488] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878492] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878496] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878500] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878504] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878508] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878511] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878515] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878519] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878523] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878529] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878533] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878537] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878541] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878545] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878549] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878553] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878557] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878560] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878564] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878568] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878572] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878576] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878580] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878583] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878587] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878591] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878595] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878599] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878603] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878606] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:17:04.665 [2024-11-26 21:13:51.878610] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0
00:17:04.665 [2024-11-26 21:13:51.878613] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:17:04.665 [2024-11-26 21:13:51.878630] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.878641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x184c00
00:17:04.665 [2024-11-26 21:13:51.884259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.665 [2024-11-26 21:13:51.884268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:17:04.665 [2024-11-26 21:13:51.884273] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00
00:17:04.665 [2024-11-26 21:13:51.884280] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:17:04.665 [2024-11-26 21:13:51.884286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:17:04.665 [2024-11-26 21:13:51.884290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884301] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884330] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884343] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884353] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884375] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884383] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884387] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884397] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884421] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884434] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884439] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884465] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884473] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:17:04.666 [2024-11-26 21:13:51.884476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884480] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884593] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:17:04.666 [2024-11-26 21:13:51.884597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884603] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884627] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:17:04.666 [2024-11-26 21:13:51.884639] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884645] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884666] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884673] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:17:04.666 [2024-11-26 21:13:51.884677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884681] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884699] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184c00
00:17:04.666 [2024-11-26 21:13:51.884737] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884747] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:17:04.666 [2024-11-26 21:13:51.884751] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:17:04.666 [2024-11-26 21:13:51.884755] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:17:04.666 [2024-11-26 21:13:51.884760] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:17:04.666 [2024-11-26 21:13:51.884764] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:17:04.666 [2024-11-26 21:13:51.884768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884783] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884807] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884817] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.666 [2024-11-26 21:13:51.884827] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.666 [2024-11-26 21:13:51.884836] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.666 [2024-11-26 21:13:51.884845] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.666 [2024-11-26 21:13:51.884854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884858] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884864] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:17:04.666 [2024-11-26 21:13:51.884869] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884874] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.666 [2024-11-26 21:13:51.884888] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.666 [2024-11-26 21:13:51.884892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0
00:17:04.666 [2024-11-26 21:13:51.884897] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:17:04.666 [2024-11-26 21:13:51.884901] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:17:04.666 [2024-11-26 21:13:51.884904] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884911] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.666 [2024-11-26 21:13:51.884916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184c00
00:17:04.666 [2024-11-26 21:13:51.884940] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.667 [2024-11-26 21:13:51.884944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:17:04.667 [2024-11-26 21:13:51.884949] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.884955] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:17:04.667 [2024-11-26 21:13:51.884975] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.884981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x184c00
00:17:04.667 [2024-11-26 21:13:51.884986] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.884991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:17:04.667 [2024-11-26 21:13:51.885013] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.667 [2024-11-26 21:13:51.885018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:17:04.667 [2024-11-26 21:13:51.885026] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.885031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184c00
00:17:04.667 [2024-11-26 21:13:51.885034] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.885039] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.667 [2024-11-26 21:13:51.885042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:17:04.667 [2024-11-26 21:13:51.885046] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.885063] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.667 [2024-11-26 21:13:51.885067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:17:04.667 [2024-11-26 21:13:51.885074] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.885079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184c00
00:17:04.667 [2024-11-26 21:13:51.885083] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00
00:17:04.667 [2024-11-26 21:13:51.885106] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.667 [2024-11-26 21:13:51.885110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:17:04.667 [2024-11-26 21:13:51.885117] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00
00:17:04.667 =====================================================
00:17:04.667 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:04.667 =====================================================
00:17:04.667 Controller Capabilities/Features
00:17:04.667 ================================
00:17:04.667 Vendor ID: 0000
00:17:04.667 Subsystem Vendor ID: 0000
00:17:04.667 Serial Number: ....................
00:17:04.667 Model Number: ........................................
00:17:04.667 Firmware Version: 25.01
00:17:04.667 Recommended Arb Burst: 0
00:17:04.667 IEEE OUI Identifier: 00 00 00
00:17:04.667 Multi-path I/O
00:17:04.667 May have multiple subsystem ports: No
00:17:04.667 May have multiple controllers: No
00:17:04.667 Associated with SR-IOV VF: No
00:17:04.667 Max Data Transfer Size: 131072
00:17:04.667 Max Number of Namespaces: 0
00:17:04.667 Max Number of I/O Queues: 1024
00:17:04.667 NVMe Specification Version (VS): 1.3
00:17:04.667 NVMe Specification Version (Identify): 1.3
00:17:04.667 Maximum Queue Entries: 128
00:17:04.667 Contiguous Queues Required: Yes
00:17:04.667 Arbitration Mechanisms Supported
00:17:04.667 Weighted Round Robin: Not Supported
00:17:04.667 Vendor Specific: Not Supported
00:17:04.667 Reset Timeout: 15000 ms
00:17:04.667 Doorbell Stride: 4 bytes
00:17:04.667 NVM Subsystem Reset: Not Supported
00:17:04.667 Command Sets Supported
00:17:04.667 NVM Command Set: Supported
00:17:04.667 Boot Partition: Not Supported
00:17:04.667 Memory Page Size Minimum: 4096 bytes
00:17:04.667 Memory Page Size Maximum: 4096 bytes
00:17:04.667 Persistent Memory Region: Not Supported
00:17:04.667 Optional Asynchronous Events Supported
00:17:04.667 Namespace Attribute Notices: Not Supported
00:17:04.667 Firmware Activation Notices: Not Supported
00:17:04.667 ANA Change Notices: Not Supported
00:17:04.667 PLE Aggregate Log Change Notices: Not Supported
00:17:04.667 LBA Status Info Alert Notices: Not Supported
00:17:04.667 EGE Aggregate Log Change Notices: Not Supported
00:17:04.667 Normal NVM Subsystem Shutdown event: Not Supported
00:17:04.667 Zone Descriptor Change Notices: Not Supported
00:17:04.667 Discovery Log Change Notices: Supported
00:17:04.667 Controller Attributes
00:17:04.667 128-bit Host Identifier: Not Supported
00:17:04.667 Non-Operational Permissive Mode: Not Supported
00:17:04.667 NVM Sets: Not Supported
00:17:04.667 Read Recovery Levels: Not Supported
00:17:04.667 Endurance Groups: Not Supported
00:17:04.667 Predictable Latency Mode: Not Supported
00:17:04.667 Traffic Based Keep ALive: Not Supported
00:17:04.667 Namespace Granularity: Not Supported
00:17:04.667 SQ Associations: Not Supported
00:17:04.667 UUID List: Not Supported
00:17:04.667 Multi-Domain Subsystem: Not Supported
00:17:04.667 Fixed Capacity Management: Not Supported
00:17:04.667 Variable Capacity Management: Not Supported
00:17:04.667 Delete Endurance Group: Not Supported
00:17:04.667 Delete NVM Set: Not Supported
00:17:04.667 Extended LBA Formats Supported: Not Supported
00:17:04.667 Flexible Data Placement Supported: Not Supported
00:17:04.667
00:17:04.667 Controller Memory Buffer Support
00:17:04.667 ================================
00:17:04.667 Supported: No
00:17:04.667
00:17:04.667 Persistent Memory Region Support
00:17:04.667 ================================
00:17:04.667 Supported: No
00:17:04.667
00:17:04.667 Admin Command Set Attributes
00:17:04.667 ============================
00:17:04.667 Security Send/Receive: Not Supported
00:17:04.667 Format NVM: Not Supported
00:17:04.667 Firmware Activate/Download: Not Supported
00:17:04.667 Namespace Management: Not Supported
00:17:04.667 Device Self-Test: Not Supported
00:17:04.667 Directives: Not Supported
00:17:04.667 NVMe-MI: Not Supported
00:17:04.667 Virtualization Management: Not Supported
00:17:04.667 Doorbell Buffer Config: Not Supported
00:17:04.667 Get LBA Status Capability: Not Supported
00:17:04.667 Command & Feature Lockdown Capability: Not Supported
00:17:04.667 Abort Command Limit: 1
00:17:04.667 Async Event Request Limit: 4
00:17:04.667 Number of Firmware Slots: N/A
00:17:04.667 Firmware Slot 1 Read-Only: N/A
00:17:04.667 Firmware Activation Without Reset: N/A
00:17:04.667 Multiple Update Detection Support: N/A
00:17:04.667 Firmware Update Granularity: No Information Provided
00:17:04.667 Per-Namespace SMART Log: No
00:17:04.667 Asymmetric Namespace Access Log Page: Not Supported
00:17:04.667 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:04.667 Command Effects Log Page: Not Supported
00:17:04.667 Get Log Page Extended Data: Supported
00:17:04.667 Telemetry Log Pages: Not Supported
00:17:04.667 Persistent Event Log Pages: Not Supported
00:17:04.667 Supported Log Pages Log Page: May Support
00:17:04.667 Commands Supported & Effects Log Page: Not Supported
00:17:04.667 Feature Identifiers & Effects Log Page:May Support
00:17:04.667 NVMe-MI Commands & Effects Log Page: May Support
00:17:04.667 Data Area 4 for Telemetry Log: Not Supported
00:17:04.667 Error Log Page Entries Supported: 128
00:17:04.667 Keep Alive: Not Supported
00:17:04.667
00:17:04.667 NVM Command Set Attributes
00:17:04.667 ==========================
00:17:04.667 Submission Queue Entry Size
00:17:04.667 Max: 1
00:17:04.667 Min: 1
00:17:04.667 Completion Queue Entry Size
00:17:04.667 Max: 1
00:17:04.667 Min: 1
00:17:04.667 Number of Namespaces: 0
00:17:04.667 Compare Command: Not Supported
00:17:04.667 Write Uncorrectable Command: Not Supported
00:17:04.667 Dataset Management Command: Not Supported
00:17:04.667 Write Zeroes Command: Not Supported
00:17:04.667 Set Features Save Field: Not Supported
00:17:04.667 Reservations: Not Supported
00:17:04.667 Timestamp: Not Supported
00:17:04.667 Copy: Not Supported
00:17:04.667 Volatile Write Cache: Not Present
00:17:04.667 Atomic Write Unit (Normal): 1
00:17:04.667 Atomic Write Unit (PFail): 1
00:17:04.667 Atomic Compare & Write Unit: 1
00:17:04.667 Fused Compare & Write: Supported
00:17:04.667 Scatter-Gather List
00:17:04.667 SGL Command Set: Supported
00:17:04.667 SGL Keyed: Supported
00:17:04.667 SGL Bit Bucket Descriptor: Not Supported
00:17:04.667 SGL Metadata Pointer: Not Supported
00:17:04.667 Oversized SGL: Not Supported
00:17:04.667 SGL Metadata Address: Not Supported
00:17:04.667 SGL Offset: Supported
00:17:04.667 Transport SGL Data Block: Not Supported
00:17:04.667 Replay Protected Memory Block: Not Supported
00:17:04.667
00:17:04.667 Firmware Slot Information
00:17:04.667 =========================
00:17:04.667 Active slot: 0
00:17:04.667
00:17:04.667
00:17:04.667 Error Log
00:17:04.668 =========
00:17:04.668
00:17:04.668 Active Namespaces
00:17:04.668 =================
00:17:04.668 Discovery Log Page
00:17:04.668 ==================
00:17:04.668 Generation Counter: 2
00:17:04.668 Number of Records: 2
00:17:04.668 Record Format: 0
00:17:04.668
00:17:04.668 Discovery Log Entry 0
00:17:04.668 ----------------------
00:17:04.668 Transport Type: 1 (RDMA)
00:17:04.668 Address Family: 1 (IPv4)
00:17:04.668 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:04.668 Entry Flags:
00:17:04.668 Duplicate Returned Information: 1
00:17:04.668 Explicit Persistent Connection Support for Discovery: 1
00:17:04.668 Transport Requirements:
00:17:04.668 Secure Channel: Not Required
00:17:04.668 Port ID: 0 (0x0000)
00:17:04.668 Controller ID: 65535 (0xffff)
00:17:04.668 Admin Max SQ Size: 128
00:17:04.668 Transport Service Identifier: 4420
00:17:04.668 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:04.668 Transport Address: 192.168.100.8
00:17:04.668 Transport Specific Address Subtype - RDMA
00:17:04.668 RDMA QP Service Type: 1 (Reliable Connected)
00:17:04.668 RDMA Provider Type: 1 (No provider specified)
00:17:04.668 RDMA CM Service: 1 (RDMA_CM)
00:17:04.668 Discovery Log Entry 1
00:17:04.668 ----------------------
00:17:04.668 Transport Type: 1 (RDMA)
00:17:04.668 Address Family: 1 (IPv4)
00:17:04.668 Subsystem Type: 2 (NVM Subsystem)
00:17:04.668 Entry Flags:
00:17:04.668 Duplicate Returned Information: 0
00:17:04.668 Explicit Persistent Connection Support for Discovery: 0
00:17:04.668 Transport Requirements:
00:17:04.668 Secure Channel: Not Required
00:17:04.668 Port ID: 0 (0x0000)
00:17:04.668 Controller ID: 65535 (0xffff)
00:17:04.668 Admin Max SQ Size: [2024-11-26 21:13:51.885174] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:17:04.668 [2024-11-26 21:13:51.885181] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55241 doesn't match qid
00:17:04.668 [2024-11-26 21:13:51.885192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32733 cdw0:8c5aca50 sqhd:4a40 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885196] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55241 doesn't match qid
00:17:04.668 [2024-11-26 21:13:51.885202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32733 cdw0:8c5aca50 sqhd:4a40 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885207] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55241 doesn't match qid
00:17:04.668 [2024-11-26 21:13:51.885213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32733 cdw0:8c5aca50 sqhd:4a40 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885217] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 55241 doesn't match qid
00:17:04.668 [2024-11-26 21:13:51.885222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32733 cdw0:8c5aca50 sqhd:4a40 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885231] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885251] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885268] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885277] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885298] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885306] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:17:04.668 [2024-11-26 21:13:51.885310] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:17:04.668 [2024-11-26 21:13:51.885313] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885320] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885344] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885353] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885359] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885381] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885389] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885395] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885419] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885427] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885433] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885456] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885464] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885470] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885493] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885501] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885507] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885535] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885543] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885549] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885572] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885580] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885586] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885610] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885617] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885623] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885649] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.668 [2024-11-26 21:13:51.885653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:17:04.668 [2024-11-26 21:13:51.885657] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885663] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.668 [2024-11-26 21:13:51.885668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.668 [2024-11-26 21:13:51.885691] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885699] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885706] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885731] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885739] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885745] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885766] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885774] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885780] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885800] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885807] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885814] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885833] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885841] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885847] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885873] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885881] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885887] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885909] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885917] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885924] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885943] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885951] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885957] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.885981] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.885985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.885989] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.885995] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886017] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886026] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886032] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886059] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886067] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886073] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886097] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886105] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886111] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886135] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886143] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886149] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886176] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886184] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886190] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886211] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886219] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886225] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886244] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886256] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886262] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.669 [2024-11-26 21:13:51.886280] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.669 [2024-11-26 21:13:51.886284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0
00:17:04.669 [2024-11-26 21:13:51.886288] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886297] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.669 [2024-11-26 21:13:51.886303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886316] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886324] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886330] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886352] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886360] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886366] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886386] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886394] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886400] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886424] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886432] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886438] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886462] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886470] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886476] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886501] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886509] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886517] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886538] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886546] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886552] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886573] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886581] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886587] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886611] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886619] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886625] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0
00:17:04.670 [2024-11-26 21:13:51.886653] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886659] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.670 [2024-11-26 21:13:51.886664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.670 [2024-11-26 21:13:51.886684] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.670 [2024-11-26 21:13:51.886688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886692] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886698] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886725] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886734] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886741] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886765] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886779] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886801] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886809] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886815] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886841] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886849] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886855] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886877] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886885] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886892] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886917] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886925] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886931] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886955] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886964] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886970] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.886976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.886990] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.886994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.886998] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.887004] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.887009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.671 [2024-11-26 21:13:51.887024] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.671 [2024-11-26 21:13:51.887027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:17:04.671 [2024-11-26 21:13:51.887031] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00
00:17:04.671 [2024-11-26 21:13:51.887038]
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887062] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887070] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887076] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887101] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887109] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887115] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887136] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887144] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887150] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887176] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887185] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887191] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887215] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887223] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887229] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887235] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887255] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887263] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887270] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887293] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887301] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887307] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.671 [2024-11-26 21:13:51.887331] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.671 [2024-11-26 21:13:51.887335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:04.671 [2024-11-26 21:13:51.887339] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887345] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.671 [2024-11-26 21:13:51.887351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887369] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887377] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887383] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887409] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887418] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887425] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887447] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887455] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887461] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887480] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887488] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887495] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887523] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887531] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887537] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887561] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887569] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887575] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887594] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887602] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887609] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887634] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887642] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887648] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887668] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887675] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887682] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887706] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887713] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887720] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887745] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887753] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887759] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887786] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887794] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887827] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887835] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887841] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887864] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887872] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887878] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887897] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887905] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887911] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK 
ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887935] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887943] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887949] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.887972] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.887975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.887979] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887986] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.887991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.672 [2024-11-26 21:13:51.888007] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.672 [2024-11-26 21:13:51.888011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:04.672 [2024-11-26 21:13:51.888015] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00 00:17:04.672 [2024-11-26 21:13:51.888021] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888043] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888051] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888080] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888088] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888094] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888116] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888124] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888130] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888156] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888164] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888170] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888197] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888205] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888211] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.888238] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.888241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:04.673 [2024-11-26 21:13:51.888245] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.888252] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.673 [2024-11-26 21:13:51.892264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.673 [2024-11-26 21:13:51.892286] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.673 [2024-11-26 21:13:51.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
00:17:04.673 [2024-11-26 21:13:51.892290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0000 p:0 m:0 dnr:0
00:17:04.673 [2024-11-26 21:13:51.892294] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00
00:17:04.673 [2024-11-26 21:13:51.892299] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds
00:17:04.673 128
00:17:04.673 Transport Service Identifier: 4420
00:17:04.673 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:04.673 Transport Address: 192.168.100.8
00:17:04.673 Transport Specific Address Subtype - RDMA
00:17:04.673 RDMA QP Service Type: 1 (Reliable Connected)
00:17:04.673 RDMA Provider Type: 1 (No provider specified)
00:17:04.673 RDMA CM Service: 1 (RDMA_CM)
00:17:04.673 21:13:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:17:04.673 [2024-11-26 21:13:51.961189] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
00:17:04.674 [2024-11-26 21:13:51.961224] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928502 ]
00:17:04.674 [2024-11-26 21:13:52.015440] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:17:04.674 [2024-11-26 21:13:52.015505] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:17:04.674 [2024-11-26 21:13:52.015514] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:17:04.674 [2024-11-26 21:13:52.015518] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:17:04.674 [2024-11-26 21:13:52.015540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:17:04.674 [2024-11-26 21:13:52.026784] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
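The spdk_nvme_identify run above boils down to parsing the -r transport-ID string and opening a controller handle; the FABRIC CONNECT, vs/cap/cc property reads, IDENTIFY, AER setup and keep-alive traffic logged below all fall out of that single connect call. A minimal sketch against SPDK's public API (not the tool's actual source; error handling and env teardown are trimmed, and the printed fields are illustrative):

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same string the test passes via -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: drives the admin-queue state machine
	 * seen in this log (FABRIC CONNECT, property gets/sets,
	 * IDENTIFY, AER configuration, keep-alive setup). */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("cntlid 0x%04x mdts %u\n",
	       (unsigned)cdata->cntlid, (unsigned)cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}
```

The CNTLID 0x0001 reported by nvme_fabric_qpair_connect_poll below is the same value an application would read back as cdata->cntlid once the connect returns.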
00:17:04.674 [2024-11-26 21:13:52.036455] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0
00:17:04.674 [2024-11-26 21:13:52.036466] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created
[31 near-identical nvme_rdma_create_rsps DEBUG entries elided: response buffers registered at local addr 0x2000003cf640 through 0x2000003cfaf0, 0x28 bytes apart, each length 0x10 lkey 0x184c00]
00:17:04.674 [2024-11-26 21:13:52.036591] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created
00:17:04.674 [2024-11-26 21:13:52.036594] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0
00:17:04.674 [2024-11-26 21:13:52.036597] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted
00:17:04.674 [2024-11-26 21:13:52.036614] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.674 [2024-11-26 21:13:52.036623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x184c00
00:17:04.674 [2024-11-26 21:13:52.042259] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.674 [2024-11-26 21:13:52.042267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0
00:17:04.674 [2024-11-26 21:13:52.042272] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00
00:17:04.674 [2024-11-26 21:13:52.042277] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:17:04.674 [2024-11-26 21:13:52.042282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:17:04.674 [2024-11-26 21:13:52.042286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:17:04.674 [2024-11-26 21:13:52.042295] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00
00:17:04.674 [2024-11-26 21:13:52.042301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.674 [2024-11-26 21:13:52.042316] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.674 [2024-11-26 21:13:52.042321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0
00:17:04.674 [2024-11-26 21:13:52.042325] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:17:04.674 [2024-11-26 21:13:52.042329]
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:04.674 [2024-11-26 21:13:52.042339] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.674 [2024-11-26 21:13:52.042360] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.674 [2024-11-26 21:13:52.042364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:17:04.674 [2024-11-26 21:13:52.042368] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:04.674 [2024-11-26 21:13:52.042372] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.674 [2024-11-26 21:13:52.042382] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.674 [2024-11-26 21:13:52.042400] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.674 [2024-11-26 21:13:52.042404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:04.674 [2024-11-26 21:13:52.042408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.674 [2024-11-26 21:13:52.042412] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042418] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.674 [2024-11-26 21:13:52.042441] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.674 [2024-11-26 21:13:52.042445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:04.674 [2024-11-26 21:13:52.042448] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.674 [2024-11-26 21:13:52.042452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.674 [2024-11-26 21:13:52.042456] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042460] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.674 [2024-11-26 21:13:52.042566] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:04.674 [2024-11-26 21:13:52.042570] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.674 [2024-11-26 21:13:52.042577] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.674 [2024-11-26 21:13:52.042583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.675 [2024-11-26 21:13:52.042603] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:04.675 [2024-11-26 21:13:52.042611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.675 [2024-11-26 21:13:52.042615] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042621] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.675 [2024-11-26 21:13:52.042644] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:04.675 [2024-11-26 21:13:52.042651] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.675 [2024-11-26 21:13:52.042655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042659] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042664] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:04.675 [2024-11-26 21:13:52.042669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042676] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184c00 00:17:04.675 [2024-11-26 21:13:52.042722] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 
dnr:0 00:17:04.675 [2024-11-26 21:13:52.042732] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:04.675 [2024-11-26 21:13:52.042735] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:04.675 [2024-11-26 21:13:52.042739] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:04.675 [2024-11-26 21:13:52.042744] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:04.675 [2024-11-26 21:13:52.042747] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:04.675 [2024-11-26 21:13:52.042751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042755] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042766] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.675 [2024-11-26 21:13:52.042791] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:04.675 [2024-11-26 21:13:52.042800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.675 [2024-11-26 21:13:52.042809] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.675 [2024-11-26 21:13:52.042819] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.675 [2024-11-26 21:13:52.042828] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.675 [2024-11-26 21:13:52.042836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042840] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00 
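The four ASYNC EVENT REQUEST submissions just logged (cid 1 through 4) are admin commands the host parks at the controller; they complete only when the target has an event to report. An application observes them through the AER callback. A minimal sketch, again using SPDK's public API, with the handler body purely illustrative:

```c
#include "spdk/nvme.h"

/* Invoked when one of the parked ASYNC EVENT REQUESTs completes. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "AER failed: sct=0x%x sc=0x%x\n",
			cpl->status.sct, cpl->status.sc);
		return;
	}
	/* cdw0 encodes the async event type/info per the NVMe spec. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

/* Call once after the controller handle is obtained. */
static void
enable_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}
```

The keep-alive setup that follows reads KATO back from the controller (the cdw0:2710 completion below is 0x2710 = 10000 ms), and SPDK then schedules keep-alives at half that period, matching the "Sending keep alive every 5000000 us" line.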
00:17:04.675 [2024-11-26 21:13:52.042845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042850] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.675 [2024-11-26 21:13:52.042873] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:17:04.675 [2024-11-26 21:13:52.042881] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:04.675 [2024-11-26 21:13:52.042885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042888] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042903] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.675 [2024-11-26 21:13:52.042925] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.042930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:17:04.675 [2024-11-26 21:13:52.042976] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042980] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042986] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.042992] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.675 [2024-11-26 21:13:52.042997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184c00 00:17:04.675 [2024-11-26 21:13:52.043019] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.675 [2024-11-26 21:13:52.043023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:04.675 
[2024-11-26 21:13:52.043032] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:04.675 [2024-11-26 21:13:52.043041] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:04.675 [2024-11-26 21:13:52.043045] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043057] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043093] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043111] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043117] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043123] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043156] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043165] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043169] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043197] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.676 [2024-11-26 21:13:52.043201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:04.676 [2024-11-26 21:13:52.043205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:04.676 [2024-11-26 21:13:52.043216] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.676 [2024-11-26 21:13:52.043226] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.676 [2024-11-26 21:13:52.043243] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043258] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043265] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.676 [2024-11-26 21:13:52.043275] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043283] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043287] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043294] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043300] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043305] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.676 [2024-11-26 21:13:52.043326] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043330] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043334] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043340] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.676 [2024-11-26 21:13:52.043365] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043372] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043382] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043394] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043405] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043416] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x184c00 00:17:04.676 [2024-11-26 21:13:52.043427] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043439] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043457] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043468] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043472] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043480] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00 00:17:04.676 [2024-11-26 21:13:52.043488] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.676 [2024-11-26 21:13:52.043492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:04.676 [2024-11-26 21:13:52.043498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00
00:17:04.676 =====================================================
00:17:04.676 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:04.676 =====================================================
00:17:04.676 Controller Capabilities/Features
00:17:04.676 ================================
00:17:04.676 Vendor ID: 8086
00:17:04.676 Subsystem Vendor ID: 8086
00:17:04.676 Serial Number: SPDK00000000000001
00:17:04.676 Model Number: SPDK bdev Controller
00:17:04.676 Firmware Version: 25.01
00:17:04.676 Recommended Arb Burst: 6
00:17:04.676 IEEE OUI Identifier: e4 d2 5c
00:17:04.676 Multi-path I/O
00:17:04.676 May have multiple subsystem ports: Yes
00:17:04.676 May have multiple controllers: Yes
00:17:04.676 Associated with SR-IOV VF: No
00:17:04.676 Max Data Transfer Size: 131072
00:17:04.676 Max Number of Namespaces: 32
00:17:04.676 Max Number of I/O Queues: 127
00:17:04.676 NVMe Specification Version (VS): 1.3
00:17:04.676 NVMe Specification Version (Identify): 1.3
00:17:04.676 Maximum Queue Entries: 128
00:17:04.677 Contiguous Queues Required: Yes
00:17:04.677 Arbitration Mechanisms Supported
00:17:04.677 Weighted Round Robin: Not Supported
00:17:04.677 Vendor Specific: Not Supported
00:17:04.677 Reset Timeout: 15000 ms
00:17:04.677 Doorbell Stride: 4 bytes
00:17:04.677 NVM Subsystem Reset: Not Supported
00:17:04.677 Command Sets Supported
00:17:04.677 NVM Command Set: Supported
00:17:04.677 Boot Partition: Not Supported
00:17:04.677 Memory Page Size Minimum: 4096 bytes
00:17:04.677 Memory Page Size Maximum: 4096 bytes
00:17:04.677 Persistent Memory Region: Not Supported
00:17:04.677 Optional Asynchronous Events Supported
00:17:04.677 Namespace Attribute Notices: Supported
00:17:04.677 Firmware Activation Notices: Not Supported
00:17:04.677 ANA Change Notices: Not Supported
00:17:04.677 PLE Aggregate Log Change Notices: Not Supported
00:17:04.677 LBA Status Info Alert Notices: Not Supported
00:17:04.677 EGE Aggregate Log Change Notices: Not Supported
00:17:04.677 Normal NVM Subsystem Shutdown event: Not Supported
00:17:04.677 Zone Descriptor Change Notices: Not Supported
00:17:04.677 Discovery Log Change Notices: Not Supported
00:17:04.677 Controller Attributes
00:17:04.677 128-bit Host Identifier: Supported
00:17:04.677 Non-Operational Permissive Mode: Not Supported
00:17:04.677 NVM Sets: Not Supported
00:17:04.677 Read Recovery Levels: Not Supported
00:17:04.677 Endurance Groups: Not Supported
00:17:04.677 Predictable Latency Mode: Not Supported
00:17:04.677 Traffic Based Keep ALive: Not Supported
00:17:04.677 Namespace Granularity: Not Supported
00:17:04.677 SQ Associations: Not Supported
00:17:04.677 UUID List: Not Supported
00:17:04.677 Multi-Domain Subsystem: Not Supported
00:17:04.677 Fixed Capacity Management: Not Supported
00:17:04.677 Variable Capacity Management: Not Supported
00:17:04.677 Delete Endurance Group: Not Supported
00:17:04.677 Delete NVM Set: Not Supported
00:17:04.677 Extended LBA Formats Supported: Not Supported
00:17:04.677 Flexible Data Placement Supported: Not Supported
00:17:04.677
00:17:04.677 Controller Memory Buffer Support
00:17:04.677 ================================
00:17:04.677 Supported: No
00:17:04.677
00:17:04.677 Persistent Memory Region Support
00:17:04.677 ================================
00:17:04.677 Supported: No
00:17:04.677
00:17:04.677 Admin Command Set Attributes
00:17:04.677 ============================
00:17:04.677 Security Send/Receive: Not Supported
00:17:04.677 Format NVM: Not Supported
00:17:04.677 Firmware Activate/Download: Not Supported
00:17:04.677 Namespace Management: Not Supported
00:17:04.677 Device Self-Test: Not Supported
00:17:04.677 Directives: Not Supported
00:17:04.677 NVMe-MI: Not Supported
00:17:04.677 Virtualization Management: Not Supported
00:17:04.677 Doorbell Buffer Config: Not Supported
00:17:04.677 Get LBA Status Capability: Not Supported
00:17:04.677 Command & Feature Lockdown Capability: Not Supported
00:17:04.677 Abort Command Limit: 4
00:17:04.677 Async Event Request Limit: 4
00:17:04.677 Number of Firmware Slots: N/A
00:17:04.677 Firmware Slot 1 Read-Only: N/A
00:17:04.677 Firmware Activation Without Reset: N/A
00:17:04.677 Multiple Update Detection Support: N/A
00:17:04.677 Firmware Update Granularity: No Information Provided
00:17:04.677 Per-Namespace SMART Log: No
00:17:04.677 Asymmetric Namespace Access Log Page: Not Supported
00:17:04.677 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:17:04.677 Command Effects Log Page: Supported
00:17:04.677 Get Log Page Extended Data: Supported
00:17:04.677 Telemetry Log Pages: Not Supported
00:17:04.677 Persistent Event Log Pages: Not Supported
00:17:04.677 Supported Log Pages Log Page: May Support
00:17:04.677 Commands Supported & Effects Log Page: Not Supported
00:17:04.677 Feature Identifiers & Effects Log Page:May Support
00:17:04.677 NVMe-MI Commands & Effects Log Page: May Support
00:17:04.677 Data Area 4 for Telemetry Log: Not Supported
00:17:04.677 Error Log Page Entries Supported: 128
00:17:04.677 Keep Alive: Supported
00:17:04.677 Keep Alive Granularity: 10000 ms
00:17:04.677
00:17:04.677 NVM Command Set Attributes
00:17:04.677 ==========================
00:17:04.677 Submission Queue Entry Size
00:17:04.677 Max: 64
00:17:04.677 Min: 64
00:17:04.677 Completion Queue Entry Size
00:17:04.677 Max: 16
00:17:04.677 Min: 16
00:17:04.677 Number of Namespaces: 32
00:17:04.677 Compare Command: Supported
00:17:04.677 Write Uncorrectable Command: Not Supported
00:17:04.677 Dataset Management Command: Supported
00:17:04.677 Write Zeroes Command: Supported
00:17:04.677 Set Features Save Field: Not Supported
00:17:04.677 Reservations: Supported
00:17:04.677 Timestamp: Not Supported
00:17:04.677 Copy: Supported
00:17:04.677 Volatile Write Cache: Present
00:17:04.677 Atomic Write Unit (Normal): 1
00:17:04.677 Atomic Write Unit (PFail): 1
00:17:04.677 Atomic Compare & Write Unit: 1
00:17:04.677 Fused Compare & Write: Supported
00:17:04.677 Scatter-Gather List
00:17:04.677 SGL Command Set: Supported
00:17:04.677 SGL Keyed: Supported
00:17:04.677 SGL Bit Bucket Descriptor: Not Supported
00:17:04.677 SGL Metadata Pointer: Not Supported
00:17:04.677 Oversized SGL: Not Supported
00:17:04.677 SGL Metadata Address: Not Supported
00:17:04.677 SGL Offset: Supported
00:17:04.677 Transport SGL Data Block: Not Supported
00:17:04.677 Replay Protected Memory Block: Not Supported
00:17:04.677
00:17:04.677 Firmware Slot Information
00:17:04.677 =========================
00:17:04.677 Active slot: 1
00:17:04.677 Slot 1 Firmware Revision: 25.01
00:17:04.677
00:17:04.677
00:17:04.677 Commands Supported and Effects
00:17:04.677 ==============================
00:17:04.677 Admin Commands
00:17:04.677 --------------
00:17:04.677 Get Log Page (02h): Supported
00:17:04.677 Identify (06h): Supported
00:17:04.677 Abort (08h): Supported
00:17:04.677 Set Features (09h): Supported
00:17:04.677 Get Features (0Ah): Supported
00:17:04.677 Asynchronous Event Request (0Ch): Supported
00:17:04.677 Keep Alive (18h): Supported
00:17:04.677 I/O Commands
00:17:04.677 ------------
00:17:04.677 Flush (00h): Supported LBA-Change
00:17:04.677 Write (01h): Supported LBA-Change
00:17:04.677 Read (02h): Supported
00:17:04.677 Compare (05h): Supported
00:17:04.677 Write Zeroes (08h): Supported LBA-Change
00:17:04.677 Dataset Management (09h): Supported LBA-Change
00:17:04.677 Copy (19h): Supported LBA-Change
00:17:04.677
00:17:04.677 Error Log
00:17:04.677 =========
00:17:04.677
00:17:04.677 Arbitration
00:17:04.677 ===========
00:17:04.677 Arbitration Burst: 1
00:17:04.677
00:17:04.677 Power Management
00:17:04.677 ================
00:17:04.677 Number of Power States: 1
00:17:04.677 Current Power State: Power State #0
00:17:04.677 Power State #0:
00:17:04.677 Max Power: 0.00 W
00:17:04.677 Non-Operational State: Operational
00:17:04.677 Entry Latency: Not Reported
00:17:04.677 Exit Latency: Not Reported
00:17:04.677 Relative Read Throughput: 0
00:17:04.677 Relative Read Latency: 0
00:17:04.677 Relative Write Throughput: 0
00:17:04.677 Relative Write Latency: 0
00:17:04.677 Idle Power: Not Reported
00:17:04.677 Active Power: Not Reported
00:17:04.677 Non-Operational Permissive Mode: Not Supported
00:17:04.677
00:17:04.677 Health Information
00:17:04.677 ==================
00:17:04.677 Critical Warnings:
00:17:04.677 Available Spare Space: OK
00:17:04.677 Temperature: OK
00:17:04.677 Device Reliability: OK
00:17:04.677 Read Only: No
00:17:04.677 Volatile Memory Backup: OK
00:17:04.677 Current Temperature: 0 Kelvin (-273 Celsius)
00:17:04.677 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:17:04.677 Available Spare: 0%
00:17:04.677 Available Spare Threshold: 0%
00:17:04.677 Life Percentage [2024-11-26 21:13:52.043564] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x184c00 00:17:04.677 [2024-11-26 21:13:52.043570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.677 [2024-11-26 21:13:52.043587] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.677 [2024-11-26 21:13:52.043591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:04.677 [2024-11-26 21:13:52.043597] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043618] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:04.678 [2024-11-26 21:13:52.043624] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 913 doesn't match qid 00:17:04.678 [2024-11-26 21:13:52.043635] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32626 cdw0:96bd2d90 sqhd:8a40 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043640] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 913 doesn't match qid 00:17:04.678 [2024-11-26 21:13:52.043645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32626 cdw0:96bd2d90 sqhd:8a40 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043649] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 913 doesn't match qid 00:17:04.678 [2024-11-26 21:13:52.043655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32626 cdw0:96bd2d90 sqhd:8a40 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043659] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 913 doesn't match qid 00:17:04.678 [2024-11-26 21:13:52.043664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32626 cdw0:96bd2d90 sqhd:8a40 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043670] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043693] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043703] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043713] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043735] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043743] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:04.678 [2024-11-26 21:13:52.043747] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:04.678 [2024-11-26 21:13:52.043751] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043757] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043785] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043793] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043800] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043828] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043837] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043843] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043868] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043876] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043883] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043910] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043918] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043924] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043947] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043955] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043961] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.043983] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.043987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.043991] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.043997] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044016] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044024] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044031] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044058] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044066] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044072] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044090] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044098] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044105] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044129] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044137] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044143] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044163] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044170] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044177] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044203] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044211] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044218] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.678 [2024-11-26 21:13:52.044243] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.678 [2024-11-26 21:13:52.044247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:17:04.678 [2024-11-26 21:13:52.044251] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044263] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.678 [2024-11-26 21:13:52.044270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044287] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044295] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044302] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044324] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 
21:13:52.044331] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044338] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044363] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044371] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044378] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044400] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044407] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044414] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044433] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044441] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044447] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044470] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044479] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044486] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044507] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044515] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044521] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044541] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044549] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044556] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044574] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044582] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044588] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044617] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044625] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044631] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044654] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044662] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044668] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044694] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044702] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044710] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044730] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044738] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044744] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044765] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044773] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044779] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044806] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044813] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044820] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044844] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 
21:13:52.044852] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044858] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044885] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044893] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044899] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.679 [2024-11-26 21:13:52.044924] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.679 [2024-11-26 21:13:52.044928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:17:04.679 [2024-11-26 21:13:52.044933] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044939] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.679 [2024-11-26 21:13:52.044945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.044960] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.044964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.044968] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.044974] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.044979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.044998] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045006] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045012] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045040] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045048] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045054] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045080] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045087] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045094] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045116] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045124] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045130] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045155] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045165] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045171] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045193] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045201] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045207] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 
length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045231] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045239] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045245] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045273] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045281] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045287] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045308] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045316] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045322] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045348] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 21:13:52.045355] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045361] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00 00:17:04.680 [2024-11-26 21:13:52.045367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:17:04.680 [2024-11-26 21:13:52.045383] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:17:04.680 [2024-11-26 21:13:52.045388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:17:04.680 [2024-11-26 
21:13:52.045392] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184c00
00:17:04.680 [2024-11-26 21:13:52.045398] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.680 [2024-11-26 21:13:52.045403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.680 [2024-11-26 21:13:52.045421] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.680 [2024-11-26 21:13:52.045424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0
[... the same cycle (nvme_rdma_request_ready, _nvme_rdma_qpair_submit_request, FABRIC PROPERTY GET qid:0 cid:3, CQ recv completion, SUCCESS (00/00) cdw0:1) repeats for sqhd:0007 through sqhd:001c, with the request-ready address stepping in 0x28 increments from 0x2000003cf758 to 0x2000003cfac8, timestamps 21:13:52.045428 through 21:13:52.046241 ...]
00:17:04.682 [2024-11-26 21:13:52.046241] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x184c00
00:17:04.682 [2024-11-26 21:13:52.046247] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x184c00
00:17:04.682 [2024-11-26 21:13:52.050262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:17:04.682 [2024-11-26 21:13:52.050277] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:17:04.682 [2024-11-26 21:13:52.050282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001d p:0 m:0 dnr:0
00:17:04.682 [2024-11-26 21:13:52.050286] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x184c00
00:17:04.682 [2024-11-26 21:13:52.050291] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 6 milliseconds
00:17:04.682 Used: 0%
00:17:04.682 Data Units Read: 0
00:17:04.682 Data Units Written: 0
00:17:04.682 Host Read Commands: 0
00:17:04.682 Host Write Commands: 0
00:17:04.682 Controller Busy Time: 0 minutes
00:17:04.682 Power Cycles: 0
00:17:04.682 Power On Hours: 0 hours
00:17:04.682 Unsafe Shutdowns: 0
00:17:04.682 Unrecoverable Media Errors: 0
00:17:04.682 Lifetime Error Log Entries: 0
00:17:04.682 Warning Temperature Time: 0 minutes
00:17:04.682 Critical Temperature Time: 0 minutes
00:17:04.682
00:17:04.682 Number of Queues
00:17:04.682 ================
00:17:04.682 Number of I/O Submission Queues: 127
00:17:04.682 Number of I/O Completion Queues: 127
00:17:04.682
00:17:04.682 Active Namespaces
00:17:04.682 =================
00:17:04.682 Namespace ID:1
00:17:04.682 Error Recovery Timeout: Unlimited
00:17:04.682 Command Set Identifier: NVM (00h)
00:17:04.682 Deallocate: Supported
00:17:04.682 Deallocated/Unwritten Error: Not Supported
00:17:04.682 Deallocated Read Value: Unknown
00:17:04.682 Deallocate in Write Zeroes: Not Supported
00:17:04.682 Deallocated Guard Field: 0xFFFF
00:17:04.682 Flush: Supported
00:17:04.682 Reservation: Supported
00:17:04.682 Namespace Sharing Capabilities: Multiple Controllers
00:17:04.682 Size (in LBAs): 131072 (0GiB)
00:17:04.682 Capacity (in LBAs): 131072 (0GiB)
00:17:04.682 Utilization (in LBAs): 131072 (0GiB)
00:17:04.682 NGUID: ABCDEF0123456789ABCDEF0123456789
00:17:04.682 EUI64: ABCDEF0123456789
00:17:04.682 UUID: 83d71056-6d2c-4b23-83c8-31a3b76776c8
00:17:04.682 Thin Provisioning: Not Supported
00:17:04.682 Per-NS Atomic Units: Yes
00:17:04.682 Atomic Boundary Size (Normal): 0
00:17:04.682 Atomic Boundary Size (PFail): 0
00:17:04.682 Atomic Boundary Offset: 0
00:17:04.682 Maximum Single Source Range Length: 65535
00:17:04.682 Maximum Copy Length: 65535
00:17:04.682 Maximum Source Range Count: 1
00:17:04.682 NGUID/EUI64 Never Reused: No
00:17:04.682 Namespace Write Protected: No
00:17:04.682 Number of LBA Formats: 1
00:17:04.682 Current LBA Format: LBA Format #00
00:17:04.682 LBA Format #00: Data Size: 512 Metadata Size: 0
00:17:04.682
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
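For readers reproducing this step by hand: the namespace report above is emitted by SPDK's identify example over NVMe-oF/RDMA. A rough out-of-tree equivalent with stock nvme-cli is sketched below; it assumes the kernel nvme-rdma module is loaded, the target is still listening on 192.168.100.8:4420, and the attached controller enumerates as /dev/nvme0 (that device name is an assumption, not taken from this log).

  # discover the subsystems exposed by the RDMA target
  nvme discover -t rdma -a 192.168.100.8 -s 4420
  # attach to the subsystem used throughout this test
  nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  # dump controller and namespace identify data (mirrors the report above)
  nvme id-ctrl /dev/nvme0
  nvme id-ns /dev/nvme0 -n 1
  # detach when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1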
00:17:04.942 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3928384 ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3928384
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3928384 ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3928384
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928384
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928384'
killing process with pid 3928384
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3928384
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3928384
00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:17:05.202
00:17:05.202 real 0m6.688s
00:17:05.202 user 0m5.508s
00:17:05.202 sys 0m4.408s
00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
21:13:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:17:05.202 ************************************
00:17:05.202 END TEST nvmf_identify
00:17:05.202 ************************************
00:17:05.202 21:13:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
21:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
21:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111
-- # xtrace_disable 00:17:05.202 21:13:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.202 ************************************ 00:17:05.202 START TEST nvmf_perf 00:17:05.202 ************************************ 00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:17:05.202 * Looking for test storage... 00:17:05.202 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.202 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.462 --rc genhtml_branch_coverage=1 00:17:05.462 --rc genhtml_function_coverage=1 00:17:05.462 --rc genhtml_legend=1 00:17:05.462 --rc geninfo_all_blocks=1 00:17:05.462 --rc geninfo_unexecuted_blocks=1 00:17:05.462 00:17:05.462 ' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.462 --rc genhtml_branch_coverage=1 00:17:05.462 --rc genhtml_function_coverage=1 00:17:05.462 --rc genhtml_legend=1 00:17:05.462 --rc geninfo_all_blocks=1 00:17:05.462 --rc geninfo_unexecuted_blocks=1 00:17:05.462 00:17:05.462 ' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.462 --rc genhtml_branch_coverage=1 00:17:05.462 --rc genhtml_function_coverage=1 00:17:05.462 --rc genhtml_legend=1 00:17:05.462 --rc geninfo_all_blocks=1 00:17:05.462 --rc geninfo_unexecuted_blocks=1 00:17:05.462 00:17:05.462 ' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.462 --rc genhtml_branch_coverage=1 00:17:05.462 --rc genhtml_function_coverage=1 00:17:05.462 --rc genhtml_legend=1 00:17:05.462 --rc geninfo_all_blocks=1 00:17:05.462 --rc geninfo_unexecuted_blocks=1 00:17:05.462 00:17:05.462 ' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.462 21:13:52 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.462 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.463 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.463 21:13:52 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:17:05.463 21:13:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.738 21:13:58 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.738 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:10.738 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:10.739 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:10.739 Found net devices under 0000:18:00.0: mlx_0_0 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
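The discovery pass traced above builds an allow-list of Intel (e810/x722) and Mellanox device IDs, keeps only the mlx5 entries because SPDK_TEST_NVMF_NICS=mlx5, and then resolves each matching PCI function to its net device via sysfs. A minimal standalone sketch of that idea (a hypothetical helper, not the nvmf/common.sh implementation itself):

  # list ConnectX NICs (Mellanox vendor ID 0x15b3) and their netdev names via sysfs
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(cat "$pci/vendor")
      device=$(cat "$pci/device")
      if [[ $vendor == 0x15b3 ]]; then
          echo "Found ${pci##*/} ($vendor - $device)"
          ls "$pci/net" 2>/dev/null   # e.g. mlx_0_0 / mlx_0_1 after the test's renaming
      fi
  done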
00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:10.739 Found net devices under 0000:18:00.1: mlx_0_1 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:10.739 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:11.000 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
altname enp24s0f0np0
altname ens785f0np0
inet 192.168.100.8/24 scope global mlx_0_0
valid_lft forever preferred_lft forever
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
altname enp24s0f1np1
altname ens785f1np1
inet 192.168.100.9/24 scope global mlx_0_1
valid_lft forever preferred_lft forever
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 ))
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list)
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}'
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1
21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- #
RDMA_IP_LIST='192.168.100.8 00:17:11.001 192.168.100.9' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:11.001 192.168.100.9' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:11.001 192.168.100.9' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3931760 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3931760 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3931760 ']' 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.001 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:11.001 [2024-11-26 21:13:58.417739] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
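With the target process up and listening on /var/tmp/spdk.sock, the rest of host/perf.sh configures it over JSON-RPC. Condensed from the trace that follows (the same commands, with rpc.py shortened to a local variable for readability; perf.sh additionally derives the local NVMe traddr via gen_nvme.sh and jq):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512    # 64 MiB RAM-backed bdev with 512 B blocks
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420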
00:17:11.001 [2024-11-26 21:13:58.417798] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.261 [2024-11-26 21:13:58.481909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:11.261 [2024-11-26 21:13:58.522564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.261 [2024-11-26 21:13:58.522602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.261 [2024-11-26 21:13:58.522609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.261 [2024-11-26 21:13:58.522614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.262 [2024-11-26 21:13:58.522619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.262 [2024-11-26 21:13:58.523987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.262 [2024-11-26 21:13:58.524100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.262 [2024-11-26 21:13:58.524180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:11.262 [2024-11-26 21:13:58.524182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:11.262 21:13:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:14.550 21:14:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:14.550 21:14:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:14.550 21:14:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:17:14.550 21:14:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.810 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:14.810 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:17:14.810 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:14.810 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:17:14.810 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:17:15.069 [2024-11-26 21:14:02.250981] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:17:15.069 [2024-11-26 21:14:02.269425] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb26280/0xb32a40) succeed. 00:17:15.069 [2024-11-26 21:14:02.277782] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb28970/0xbb2a80) succeed. 00:17:15.069 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:15.327 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:15.327 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.327 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:15.327 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:15.586 21:14:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:15.845 [2024-11-26 21:14:03.108740] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:15.845 21:14:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:16.103 21:14:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:17:16.103 21:14:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:17:16.103 21:14:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:16.103 21:14:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:17:17.481 Initializing NVMe Controllers 00:17:17.481 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:17:17.481 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:17:17.481 Initialization complete. Launching workers. 
00:17:17.481 ========================================================
00:17:17.481 Latency(us)
00:17:17.481 Device Information : IOPS MiB/s Average min max
00:17:17.481 PCIE (0000:d8:00.0) NSID 1 from core 0: 106970.80 417.85 298.73 36.60 4283.17
00:17:17.481 ========================================================
00:17:17.481 Total : 106970.80 417.85 298.73 36.60 4283.17
00:17:17.481
00:17:17.481 21:14:04 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:20.765 Initializing NVMe Controllers
00:17:20.765 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:20.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:20.765 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:20.765 Initialization complete. Launching workers.
00:17:20.765 ========================================================
00:17:20.765 Latency(us)
00:17:20.765 Device Information : IOPS MiB/s Average min max
00:17:20.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7072.99 27.63 141.17 44.59 4080.92
00:17:20.765 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5416.99 21.16 184.40 70.64 4099.09
00:17:20.765 ========================================================
00:17:20.765 Total : 12489.99 48.79 159.92 44.59 4099.09
00:17:20.765
00:17:20.765 21:14:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:24.053 Initializing NVMe Controllers
00:17:24.053 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:24.053 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:24.053 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:24.053 Initialization complete. Launching workers.
00:17:24.053 ========================================================
00:17:24.053 Latency(us)
00:17:24.053 Device Information : IOPS MiB/s Average min max
00:17:24.053 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19508.00 76.20 1640.36 472.68 5393.41
00:17:24.053 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7979.48 6907.79 9053.58
00:17:24.053 ========================================================
00:17:24.053 Total : 23540.00 91.95 2726.14 472.68 9053.58
00:17:24.053
00:17:24.053 21:14:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:17:24.053 21:14:11 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:17:28.247 Initializing NVMe Controllers
00:17:28.247 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:28.247 Controller IO queue size 128, less than required.
00:17:28.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:28.247 Controller IO queue size 128, less than required.
00:17:28.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:28.247 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:28.247 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:28.247 Initialization complete. Launching workers.
00:17:28.247 ========================================================
00:17:28.247 Latency(us)
00:17:28.247 Device Information : IOPS MiB/s Average min max
00:17:28.247 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4175.87 1043.97 30800.23 15377.38 69738.86
00:17:28.247 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4208.86 1052.22 30218.40 13943.95 47823.81
00:17:28.247 ========================================================
00:17:28.247 Total : 8384.74 2096.18 30508.17 13943.95 69738.86
00:17:28.247
00:17:28.507 21:14:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:17:28.765 No valid NVMe controllers or AIO or URING devices found
00:17:28.765 Initializing NVMe Controllers
00:17:28.765 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:28.765 Controller IO queue size 128, less than required.
00:17:28.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:28.765 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:17:28.765 Controller IO queue size 128, less than required.
00:17:28.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:28.766 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:17:28.766 WARNING: Some requested NVMe devices were skipped
00:17:28.766 21:14:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:17:32.959 Initializing NVMe Controllers
00:17:32.959 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:32.959 Controller IO queue size 128, less than required.
00:17:32.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:32.959 Controller IO queue size 128, less than required.
00:17:32.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:32.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:32.959 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:17:32.959 Initialization complete. Launching workers.
00:17:32.959
00:17:32.959 ====================
00:17:32.959 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:17:32.959 RDMA transport:
00:17:32.959 dev name: mlx5_0
00:17:32.959 polls: 434075
00:17:32.959 idle_polls: 430109
00:17:32.959 completions: 47126
00:17:32.959 queued_requests: 1
00:17:32.959 total_send_wrs: 23563
00:17:32.959 send_doorbell_updates: 3731
00:17:32.959 total_recv_wrs: 23690
00:17:32.959 recv_doorbell_updates: 3733
00:17:32.959 ---------------------------------
00:17:32.959
00:17:32.959 ====================
00:17:32.959 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:17:32.959 RDMA transport:
00:17:32.959 dev name: mlx5_0
00:17:32.959 polls: 437548
00:17:32.959 idle_polls: 437264
00:17:32.959 completions: 20910
00:17:32.959 queued_requests: 1
00:17:32.959 total_send_wrs: 10455
00:17:32.959 send_doorbell_updates: 253
00:17:32.959 total_recv_wrs: 10582
00:17:32.959 recv_doorbell_updates: 254
00:17:32.959 ---------------------------------
00:17:32.959 ========================================================
00:17:32.959 Latency(us)
00:17:32.959 Device Information : IOPS MiB/s Average min max
00:17:32.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5890.50 1472.62 21780.33 11106.66 51892.19
00:17:32.959 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2613.50 653.37 49049.87 28215.71 71729.65
00:17:32.959 ========================================================
00:17:32.959 Total : 8504.00 2126.00 30160.97 11106.66 71729.65
00:17:32.959
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:33.218 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:33.477 rmmod nvme_rdma
rmmod nvme_fabrics
21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:33.477 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:17:33.477 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:17:33.477 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3931760 ']'
00:17:33.477 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3931760
00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3931760 ']'
00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@958 -- # kill -0 3931760 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3931760 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3931760' 00:17:33.478 killing process with pid 3931760 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3931760 00:17:33.478 21:14:20 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3931760 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:37.670 00:17:37.670 real 0m32.099s 00:17:37.670 user 1m46.080s 00:17:37.670 sys 0m5.622s 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.670 ************************************ 00:17:37.670 END TEST nvmf_perf 00:17:37.670 ************************************ 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:37.670 ************************************ 00:17:37.670 START TEST nvmf_fio_host 00:17:37.670 ************************************ 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:17:37.670 * Looking for test storage... 
00:17:37.670 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:17:37.670 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:37.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.671 --rc genhtml_branch_coverage=1 00:17:37.671 --rc genhtml_function_coverage=1 00:17:37.671 --rc genhtml_legend=1 00:17:37.671 --rc geninfo_all_blocks=1 00:17:37.671 --rc geninfo_unexecuted_blocks=1 00:17:37.671 00:17:37.671 ' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:37.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.671 --rc genhtml_branch_coverage=1 00:17:37.671 --rc genhtml_function_coverage=1 00:17:37.671 --rc genhtml_legend=1 00:17:37.671 --rc geninfo_all_blocks=1 00:17:37.671 --rc geninfo_unexecuted_blocks=1 00:17:37.671 00:17:37.671 ' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:37.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.671 --rc genhtml_branch_coverage=1 00:17:37.671 --rc genhtml_function_coverage=1 00:17:37.671 --rc genhtml_legend=1 00:17:37.671 --rc geninfo_all_blocks=1 00:17:37.671 --rc geninfo_unexecuted_blocks=1 00:17:37.671 00:17:37.671 ' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:37.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.671 --rc genhtml_branch_coverage=1 00:17:37.671 --rc genhtml_function_coverage=1 00:17:37.671 --rc genhtml_legend=1 00:17:37.671 --rc geninfo_all_blocks=1 00:17:37.671 --rc geninfo_unexecuted_blocks=1 00:17:37.671 00:17:37.671 ' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.671 21:14:24 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.671 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.671 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:37.671 
21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.672 21:14:24 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:42.937 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:42.937 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:42.937 Found net devices under 0000:18:00.0: mlx_0_0 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.937 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:42.938 Found net devices under 0000:18:00.1: mlx_0_1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:42.938 
21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:42.938 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.938 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:42.938 altname enp24s0f0np0 00:17:42.938 altname ens785f0np0 00:17:42.938 inet 192.168.100.8/24 scope global mlx_0_0 00:17:42.938 valid_lft forever preferred_lft forever 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:42.938 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:42.938 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:42.938 altname enp24s0f1np1 00:17:42.938 altname ens785f1np1 00:17:42.938 inet 192.168.100.9/24 scope global mlx_0_1 00:17:42.938 valid_lft forever preferred_lft forever 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:42.938 21:14:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:42.938 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:42.939 192.168.100.9' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:42.939 192.168.100.9' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:42.939 192.168.100.9' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3939776 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3939776 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3939776 ']' 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.939 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.197 [2024-11-26 21:14:30.414701] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:17:43.197 [2024-11-26 21:14:30.414755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.197 [2024-11-26 21:14:30.475932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.197 [2024-11-26 21:14:30.517221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.197 [2024-11-26 21:14:30.517262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.197 [2024-11-26 21:14:30.517269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.197 [2024-11-26 21:14:30.517274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.197 [2024-11-26 21:14:30.517279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.197 [2024-11-26 21:14:30.518670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.197 [2024-11-26 21:14:30.518768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.197 [2024-11-26 21:14:30.518851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.197 [2024-11-26 21:14:30.518853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.197 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.197 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:17:43.197 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:43.455 [2024-11-26 21:14:30.784529] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ae7530/0x1aeba20) succeed. 00:17:43.455 [2024-11-26 21:14:30.792722] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ae8bc0/0x1b2d0c0) succeed. 
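For reference, the target bring-up that host/fio.sh traces below (and that host/perf.sh performed earlier in this log) reduces to a short RPC sequence against the running nvmf_tgt. A minimal sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock and reusing the exact transport options, names, address and port that appear in this log:

# create the RDMA transport (same options nvmf/common.sh passes above)
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# back the subsystem with a 64 MiB malloc ramdisk (512-byte blocks)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
# create the subsystem (-a: allow any host, -s: serial number)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# expose the bdev as a namespace of the subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
# listen on the mlx5 port, for the subsystem and for the discovery service
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420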
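The fio run traced below then drives that subsystem through the SPDK userspace fio plugin rather than the kernel initiator: the plugin is LD_PRELOADed into fio and the NVMe-oF transport ID is passed through --filename. A sketch of the invocation pattern, assuming fio is installed at /usr/src/fio as on this CI host (the paths match the trace that follows):

# ioengine=spdk is set inside example_config.fio; the TrID rides in --filename
LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio \
  '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096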
00:17:43.713 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:17:43.713 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.713 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.713 21:14:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:43.971 Malloc1 00:17:43.971 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.971 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:44.227 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:44.484 [2024-11-26 21:14:31.714040] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:44.484 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:44.765 21:14:31 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme'
00:17:44.765 21:14:31 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096
00:17:45.026 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:17:45.026 fio-3.35
00:17:45.026 Starting 1 thread
00:17:47.550
00:17:47.550 test: (groupid=0, jobs=1): err= 0: pid=3940304: Tue Nov 26 21:14:34 2024
00:17:47.550 read: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(148MiB/2003msec)
00:17:47.550 slat (nsec): min=1276, max=27166, avg=1387.06, stdev=374.05
00:17:47.550 clat (usec): min=1653, max=6158, avg=3367.82, stdev=74.55
00:17:47.550 lat (usec): min=1669, max=6160, avg=3369.20, stdev=74.48
00:17:47.550 clat percentiles (usec):
00:17:47.550 | 1.00th=[ 3326], 5.00th=[ 3359], 10.00th=[ 3359], 20.00th=[ 3359],
00:17:47.550 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359],
00:17:47.550 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392],
00:17:47.550 | 99.00th=[ 3425], 99.50th=[ 3458], 99.90th=[ 4113], 99.95th=[ 5342],
00:17:47.550 | 99.99th=[ 6128]
00:17:47.550 bw ( KiB/s): min=73848, max=76040, per=99.98%, avg=75464.00, stdev=1077.68, samples=4
00:17:47.550 iops : min=18462, max=19010, avg=18866.00, stdev=269.42, samples=4
00:17:47.550 write: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(148MiB/2003msec); 0 zone resets
00:17:47.550 slat (nsec): min=1314, max=19069, avg=1707.93, stdev=415.04
00:17:47.550 clat (usec): min=1677, max=6183, avg=3366.01, stdev=74.80
00:17:47.550 lat (usec): min=1686, max=6184, avg=3367.72, stdev=74.73
00:17:47.550 clat percentiles (usec):
00:17:47.550 | 1.00th=[ 3326], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3359],
00:17:47.550 | 30.00th=[ 3359], 40.00th=[ 3359], 50.00th=[ 3359], 60.00th=[ 3359],
00:17:47.550 | 70.00th=[ 3359], 80.00th=[ 3392], 90.00th=[ 3392], 95.00th=[ 3392],
00:17:47.550 | 99.00th=[ 3392], 99.50th=[ 3458], 99.90th=[ 4080], 99.95th=[ 5342],
00:17:47.550 | 99.99th=[ 6194]
00:17:47.550 bw ( KiB/s): min=73848, max=76104, per=99.99%, avg=75510.00, stdev=1108.60, samples=4
00:17:47.550 iops : min=18462, max=19026, avg=18877.50, stdev=277.15, samples=4
00:17:47.550 lat (msec) : 2=0.01%, 4=99.87%, 10=0.12%
00:17:47.550 cpu : usr=99.50%, sys=0.10%, ctx=14, majf=0, minf=4
00:17:47.550 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:47.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:47.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:47.550 issued rwts: total=37798,37816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:47.550 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:47.550 00:17:47.550 Run status group 0 (all jobs): 00:17:47.550 READ: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2003-2003msec 00:17:47.550 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2003-2003msec 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:17:47.550 21:14:34 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:17:47.550 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:17:47.550 fio-3.35 00:17:47.550 Starting 1 thread 00:17:50.130 00:17:50.130 test: (groupid=0, jobs=1): err= 0: pid=3940852: Tue Nov 26 21:14:37 2024 00:17:50.130 read: IOPS=15.1k, BW=236MiB/s (247MB/s)(461MiB/1957msec) 00:17:50.130 slat (nsec): min=2119, max=40499, avg=2430.95, stdev=943.43 00:17:50.130 clat (usec): min=457, max=8308, avg=1492.10, stdev=1193.47 00:17:50.130 lat (usec): min=459, max=8325, avg=1494.53, stdev=1193.79 00:17:50.130 clat percentiles (usec): 00:17:50.130 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 791], 20.00th=[ 865], 00:17:50.130 | 30.00th=[ 930], 40.00th=[ 1004], 50.00th=[ 1106], 60.00th=[ 1205], 00:17:50.130 | 70.00th=[ 1319], 80.00th=[ 1483], 90.00th=[ 3392], 95.00th=[ 4686], 00:17:50.130 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7308], 00:17:50.130 | 99.99th=[ 8225] 00:17:50.130 bw ( KiB/s): min=105248, max=123360, per=48.23%, avg=116464.00, stdev=7804.61, samples=4 00:17:50.130 iops : min= 6578, max= 7710, avg=7279.00, stdev=487.79, samples=4 00:17:50.130 write: IOPS=8462, BW=132MiB/s (139MB/s)(236MiB/1786msec); 0 zone resets 00:17:50.130 slat (usec): min=24, max=103, avg=28.15, stdev= 5.07 00:17:50.130 clat (usec): min=3731, max=18126, avg=12113.81, stdev=1736.55 00:17:50.130 lat (usec): min=3759, max=18152, avg=12141.96, stdev=1736.18 00:17:50.130 clat percentiles (usec): 00:17:50.130 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10945], 00:17:50.130 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:17:50.130 | 70.00th=[12911], 80.00th=[13435], 90.00th=[14091], 95.00th=[14746], 00:17:50.130 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:17:50.131 | 99.99th=[17957] 00:17:50.131 bw ( KiB/s): min=108448, max=128384, per=88.61%, avg=119984.00, stdev=8383.61, samples=4 00:17:50.131 iops : min= 6778, max= 8024, avg=7499.00, stdev=523.98, samples=4 00:17:50.131 lat (usec) : 500=0.01%, 750=4.03%, 1000=21.97% 00:17:50.131 lat (msec) : 2=32.47%, 4=1.96%, 10=8.93%, 20=30.63% 00:17:50.131 cpu : usr=96.51%, sys=1.55%, ctx=226, majf=0, minf=3 00:17:50.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:17:50.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:50.131 issued rwts: total=29535,15114,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:50.131 00:17:50.131 Run status group 0 (all jobs): 00:17:50.131 READ: bw=236MiB/s (247MB/s), 236MiB/s-236MiB/s (247MB/s-247MB/s), io=461MiB (484MB), run=1957-1957msec 00:17:50.131 WRITE: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=236MiB (248MB), run=1786-1786msec 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
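For reference, the fio host test traced above reduces to a short rpc.py recipe. This is a minimal sketch, not the harness itself: commands are shortened to paths relative to the spdk checkout shown in the log, and it assumes the target is already running with an RDMA transport bound to 192.168.100.8.

  # Back the namespace with a 64 MiB malloc bdev using 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # Create subsystem cnode1, attach Malloc1 as a namespace, listen on RDMA port 4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # Run fio through the SPDK plugin: LD_PRELOAD supplies the 'spdk' ioengine
  # named in the job file, and --filename encodes transport/address/port/namespace
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096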
00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:50.131 rmmod nvme_rdma 00:17:50.131 rmmod nvme_fabrics 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3939776 ']' 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3939776 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3939776 ']' 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3939776 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.131 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3939776 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3939776' 00:17:50.427 killing process with pid 3939776 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3939776 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3939776 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:17:50.427 00:17:50.427 real 0m13.136s 00:17:50.427 user 0m51.870s 00:17:50.427 sys 0m5.050s 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:50.427 21:14:37 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.427 ************************************ 00:17:50.427 END TEST nvmf_fio_host 00:17:50.427 ************************************ 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:50.732 ************************************ 00:17:50.732 START TEST nvmf_failover 00:17:50.732 ************************************ 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:17:50.732 * Looking for test storage... 00:17:50.732 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:17:50.732 21:14:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:50.732 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:50.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.733 --rc genhtml_branch_coverage=1 00:17:50.733 --rc genhtml_function_coverage=1 00:17:50.733 --rc genhtml_legend=1 00:17:50.733 --rc geninfo_all_blocks=1 00:17:50.733 --rc geninfo_unexecuted_blocks=1 00:17:50.733 00:17:50.733 ' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:50.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.733 --rc genhtml_branch_coverage=1 00:17:50.733 --rc genhtml_function_coverage=1 00:17:50.733 --rc genhtml_legend=1 00:17:50.733 --rc geninfo_all_blocks=1 00:17:50.733 --rc geninfo_unexecuted_blocks=1 00:17:50.733 00:17:50.733 ' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:50.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.733 --rc genhtml_branch_coverage=1 00:17:50.733 --rc genhtml_function_coverage=1 00:17:50.733 --rc genhtml_legend=1 00:17:50.733 --rc geninfo_all_blocks=1 00:17:50.733 --rc geninfo_unexecuted_blocks=1 00:17:50.733 00:17:50.733 ' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:50.733 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:50.733 --rc genhtml_branch_coverage=1 00:17:50.733 --rc genhtml_function_coverage=1 00:17:50.733 --rc genhtml_legend=1 00:17:50.733 --rc geninfo_all_blocks=1 00:17:50.733 --rc geninfo_unexecuted_blocks=1 00:17:50.733 00:17:50.733 ' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.733 21:14:38 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:50.733 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:17:50.733 21:14:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:17:57.294 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:17:57.294 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:57.294 Found net devices under 0000:18:00.0: mlx_0_0 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:57.294 Found net devices under 0000:18:00.1: mlx_0_1 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:57.294 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:57.295 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.295 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:17:57.295 altname enp24s0f0np0 00:17:57.295 altname ens785f0np0 00:17:57.295 inet 192.168.100.8/24 scope global mlx_0_0 00:17:57.295 
valid_lft forever preferred_lft forever 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:57.295 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:57.295 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:17:57.295 altname enp24s0f1np1 00:17:57.295 altname ens785f1np1 00:17:57.295 inet 192.168.100.9/24 scope global mlx_0_1 00:17:57.295 valid_lft forever preferred_lft forever 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:57.295 21:14:43 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:17:57.295 192.168.100.9' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:17:57.295 192.168.100.9' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:17:57.295 192.168.100.9' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:17:57.295 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3944673 00:17:57.296 
21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3944673 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3944673 ']' 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.296 21:14:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.296 [2024-11-26 21:14:43.994029] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:17:57.296 [2024-11-26 21:14:43.994076] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.296 [2024-11-26 21:14:44.051998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.296 [2024-11-26 21:14:44.089245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.296 [2024-11-26 21:14:44.089288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.296 [2024-11-26 21:14:44.089295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.296 [2024-11-26 21:14:44.089301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.296 [2024-11-26 21:14:44.089305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
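In outline, the failover bring-up that common.sh and failover.sh trace from here is the same target recipe with one malloc bdev and three listeners, so bdevperf has somewhere to fail over to. A condensed sketch of the calls that appear below, using the addresses and ports of this run:

  # Start the target on cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # RDMA transport, Malloc0-backed subsystem, listeners on 4420/4421/4422
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s $port
  done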
00:17:57.296 [2024-11-26 21:14:44.090421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.296 [2024-11-26 21:14:44.090503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.296 [2024-11-26 21:14:44.090505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:57.296 [2024-11-26 21:14:44.403561] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e6bcf0/0x1e701e0) succeed. 00:17:57.296 [2024-11-26 21:14:44.411642] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e6d2e0/0x1eb1880) succeed. 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:57.296 Malloc0 00:17:57.296 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.554 21:14:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:57.812 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:57.812 [2024-11-26 21:14:45.230094] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:58.069 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:17:58.069 [2024-11-26 21:14:45.410408] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:58.069 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:17:58.327 [2024-11-26 21:14:45.603053] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3945112 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3945112 /var/tmp/bdevperf.sock 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3945112 ']' 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.327 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:17:58.585 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.585 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:17:58.585 21:14:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:58.842 NVMe0n1 00:17:58.842 21:14:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:17:59.101 00:17:59.101 21:14:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.101 21:14:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3945182 00:17:59.101 21:14:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:00.034 21:14:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:00.291 21:14:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:03.595 21:14:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:18:03.595 00:18:03.595 21:14:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:03.595 21:14:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:06.872 21:14:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:06.872 [2024-11-26 21:14:54.157310] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:06.872 21:14:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:07.805 21:14:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:18:08.063 21:14:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3945182 00:18:14.623 { 00:18:14.623 "results": [ 00:18:14.623 { 00:18:14.623 "job": "NVMe0n1", 00:18:14.623 "core_mask": "0x1", 00:18:14.623 "workload": "verify", 00:18:14.623 "status": "finished", 00:18:14.623 "verify_range": { 00:18:14.623 "start": 0, 00:18:14.623 "length": 16384 00:18:14.623 }, 00:18:14.623 "queue_depth": 128, 00:18:14.623 "io_size": 4096, 00:18:14.623 "runtime": 15.00418, 00:18:14.623 "iops": 15378.381224432125, 00:18:14.623 "mibps": 60.07180165793799, 00:18:14.623 "io_failed": 3756, 00:18:14.623 "io_timeout": 0, 00:18:14.623 "avg_latency_us": 8169.091114345785, 00:18:14.623 "min_latency_us": 326.162962962963, 00:18:14.623 "max_latency_us": 1031488.0948148149 00:18:14.623 } 00:18:14.623 ], 00:18:14.623 "core_count": 1 00:18:14.623 } 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3945112 ']' 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3945112' 00:18:14.623 killing process with pid 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3945112 00:18:14.623 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:14.623 [2024-11-26 21:14:45.675783] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 
00:18:14.623 [2024-11-26 21:14:45.675837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3945112 ] 00:18:14.623 [2024-11-26 21:14:45.735058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.623 [2024-11-26 21:14:45.773850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.623 Running I/O for 15 seconds... 00:18:14.623 19328.00 IOPS, 75.50 MiB/s [2024-11-26T20:15:02.059Z] 10560.00 IOPS, 41.25 MiB/s [2024-11-26T20:15:02.059Z] [2024-11-26 21:14:48.517411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:38200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 
21:14:48.517556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.623 [2024-11-26 21:14:48.517562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.623 [2024-11-26 21:14:48.517569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 
00:18:14.624 [2024-11-26 21:14:48.517700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:38328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 
p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:38408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.517986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.517993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:38472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:38504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.624 [2024-11-26 21:14:48.518099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.624 [2024-11-26 21:14:48.518106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:38552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 21:14:48.518632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.625 [2024-11-26 21:14:48.518639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.625 [2024-11-26 
21:14:48.518645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.626 [2024-11-26 21:14:48.518741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:37888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fe000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fc000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:37904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fa000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:37912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:37920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f6000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f4000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f2000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:37944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:37952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ee000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ec000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ea000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e8000 len:0x1000 
key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e4000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e2000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e0000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043de000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.518990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dc000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.518996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043da000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d8000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d6000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.626 [2024-11-26 21:14:48.519154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x183500 00:18:14.626 [2024-11-26 21:14:48.519160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.627 [2024-11-26 21:14:48.519167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x183500 00:18:14.627 [2024-11-26 21:14:48.519173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.627 [2024-11-26 21:14:48.519181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x183500 00:18:14.627 [2024-11-26 21:14:48.519187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.627 [2024-11-26 21:14:48.520953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.627 [2024-11-26 21:14:48.520964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.627 [2024-11-26 21:14:48.520970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38144 len:8 PRP1 0x0 PRP2 0x0 00:18:14.627 [2024-11-26 21:14:48.520977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.627 [2024-11-26 21:14:48.521014] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:14.627 [2024-11-26 21:14:48.521023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:14.627 [2024-11-26 21:14:48.523596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:14.627 [2024-11-26 21:14:48.537500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:18:14.627 [2024-11-26 21:14:48.563061] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
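The long WRITE/READ dump above is the interesting part of try.txt: every command still queued on the 4420 qpair is completed manually as ABORTED - SQ DELETION when the listener disappears, each submission printed by nvme_io_qpair_print_command and its forced completion by spdk_nvme_print_completion. The block ends with the recovery the test is actually asserting: bdev_nvme starts a failover from 192.168.100.8:4420 to 192.168.100.8:4421, disconnects the failed controller, and reports the reset successful about 40 ms later, so the bdev stays up. When reproducing this by hand, one way to confirm which path the bdev is using after such an event is to query controller state over the same RPC socket; a minimal sketch (bdev_nvme_get_controllers is a standard SPDK RPC, and the optional -n filter narrows the output to the controller created above):

  # show bdev_nvme's view of controller NVMe0, including its current transport address
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0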
00:18:14.627 12544.00 IOPS, 49.00 MiB/s [2024-11-26T20:15:02.063Z] 14224.00 IOPS, 55.56 MiB/s [2024-11-26T20:15:02.063Z] 13424.80 IOPS, 52.44 MiB/s [2024-11-26T20:15:02.063Z]
[2024-11-26 21:14:51.978481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:14.627 [2024-11-26 21:14:51.978516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0
00:18:14.627 [2024-11-26 21:14:51.978526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:14.627 [2024-11-26 21:14:51.978532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0
00:18:14.627 [2024-11-26 21:14:51.978539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:14.627 [2024-11-26 21:14:51.978546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0
00:18:14.627 [2024-11-26 21:14:51.978553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:18:14.627 [2024-11-26 21:14:51.978558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:7250 p:0 m:0 dnr:0
00:18:14.627 [2024-11-26 21:14:51.980238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0
00:18:14.627 [2024-11-26 21:14:51.980250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:18:14.627 [2024-11-26 21:14:51.980262] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 
00:18:14.627 [2024-11-26 21:14:51.980268] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] already in failed state 
00:18:14.627 [2024-11-26 21:14:51.980283 .. 21:14:51.984538] nvme_qpair.c: 243/474: [~128 queued I/O command/completion pairs, all ABORTED - SQ DELETION (00/08) qid:1 sqhd:7250 p:0 m:0 dnr:0: READ sqid:1 lba:20568..21040 len:8 (SGL KEYED DATA BLOCK, key:0x180f00) and WRITE sqid:1 lba:21048..21576 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), identical except for cid/lba/buffer address; repeats elided] 
00:18:14.630 [2024-11-26 21:14:51.997998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:18:14.630 [2024-11-26 21:14:51.998017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:18:14.630 [2024-11-26 21:14:51.998024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21584 len:8 PRP1 0x0 PRP2 0x0 
00:18:14.630 [2024-11-26 21:14:51.998032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:14.630 [2024-11-26 21:14:51.998095] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 
00:18:14.630 [2024-11-26 21:14:51.998121] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Unable to perform failover, already in progress. 
00:18:14.630 [2024-11-26 21:14:52.000666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 
00:18:14.630 [2024-11-26 21:14:52.037166] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
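Not something the test prints, but useful when reading it: a dump like the one above reduces to a couple of counters. A minimal sketch in Python, assuming only the line shape visible in this excerpt (the regex is inferred from these lines, not a format SPDK guarantees stable):

    #!/usr/bin/env python3
    # Tally queued-I/O aborts printed during an SPDK failover, as in the
    # excerpt above. The line shape is an assumption taken from this log.
    import re
    import sys
    from collections import Counter

    ABORT_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def tally_aborts(log_text):
        """Count aborted commands per opcode plus total logical blocks."""
        counts = Counter()
        for m in ABORT_RE.finditer(log_text):
            opcode, _sqid, _cid, _nsid, _lba, length = m.groups()
            counts[opcode] += 1
            counts["blocks"] += int(length)  # len:8 here = 8 x 512 B = 4 KiB per I/O
        return counts

    if __name__ == "__main__":
        print(tally_aborts(sys.stdin.read()))

Fed the full dump (e.g. python3 tally_aborts.py < console.log, filename hypothetical), it should report roughly 60 READs and 68 WRITEs (lba 20568..21040 and 21048..21584) aborted in the ~18 ms before the controller reset.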
00:18:14.630 12462.67 IOPS, 48.68 MiB/s [2024-11-26T20:15:02.066Z] 13479.43 IOPS, 52.65 MiB/s [2024-11-26T20:15:02.066Z] 14241.50 IOPS, 55.63 MiB/s [2024-11-26T20:15:02.066Z] 14636.00 IOPS, 57.17 MiB/s [2024-11-26T20:15:02.066Z] 
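The MiB/s figures track the IOPS samples exactly for the 4 KiB I/O size implied by len:8 in the dumps (8 logical blocks of 512 B). A quick check against the first sample:

    # Consistency check for the progress line above (assumes 512 B logical
    # blocks and len:8 per command, i.e. 4 KiB per I/O, as in the dumps):
    iops = 12462.67
    print(round(iops * 8 * 512 / 2**20, 2))  # -> 48.68 MiB/s, matching the log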
00:18:14.630 [2024-11-26 21:14:56.351938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:14.630 [2024-11-26 21:14:56.351972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 
00:18:14.631 [2024-11-26 21:14:56.351987 .. 21:14:56.352761] nvme_qpair.c: 243/474: [same pattern as the 21:14:51 failover: dozens more queued I/O pairs completed as ABORTED - SQ DELETION (00/08), WRITE sqid:1 lba:26744..27128 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 lba:26440..26496 len:8 (SGL KEYED DATA BLOCK, key:0x183500); repeats elided, dump resumes below] 
00:18:14.632 [2024-11-26 21:14:56.352769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27136 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:26504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183500 00:18:14.632 [2024-11-26 21:14:56.352788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183500 00:18:14.632 [2024-11-26 21:14:56.352802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x183500 00:18:14.632 [2024-11-26 21:14:56.352815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:26528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183500 00:18:14.632 [2024-11-26 21:14:56.352828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:27144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:27160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:27168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:27176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 
00:18:14.632 [2024-11-26 21:14:56.352906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:27184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:27192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:27200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:27216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:27232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.632 [2024-11-26 21:14:56.352992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.632 [2024-11-26 21:14:56.352999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:27248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:27256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 
sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:26560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:26568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:26576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:27280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:27320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d4000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:26608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d2000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 
sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:26616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d0000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:27336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:27344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:27368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:27376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.633 [2024-11-26 21:14:56.353400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:26632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:26640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x183500 00:18:14.633 
[2024-11-26 21:14:56.353426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:26672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:26688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x183500 00:18:14.633 [2024-11-26 21:14:56.353507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.633 [2024-11-26 21:14:56.353515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 
dnr:0 00:18:14.634 [2024-11-26 21:14:56.353556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:27416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:27424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:27432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:27448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:27456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:14.634 [2024-11-26 21:14:56.353640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:26696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183500 00:18:14.634 [2024-11-26 21:14:56.353653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:26704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x183500 00:18:14.634 [2024-11-26 21:14:56.353666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183500 00:18:14.634 [2024-11-26 21:14:56.353679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.353686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:26720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x183500 00:18:14.634 [2024-11-26 21:14:56.353692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:ecb89000 sqhd:7250 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.355549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:14.634 [2024-11-26 21:14:56.355560] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:14.634 [2024-11-26 21:14:56.355566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26728 len:8 PRP1 0x0 PRP2 0x0 00:18:14.634 [2024-11-26 21:14:56.355572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.355608] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:18:14.634 [2024-11-26 21:14:56.355617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:18:14.634 [2024-11-26 21:14:56.355635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.634 [2024-11-26 21:14:56.355642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:2481ab0 sqhd:c850 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.355650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.634 [2024-11-26 21:14:56.355656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:2481ab0 sqhd:c850 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.355663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.634 [2024-11-26 21:14:56.355668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:2481ab0 sqhd:c850 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.355675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:14.634 [2024-11-26 21:14:56.355681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:2481ab0 sqhd:c850 p:0 m:0 dnr:0 00:18:14.634 [2024-11-26 21:14:56.368872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:18:14.634 [2024-11-26 21:14:56.368891] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:18:14.634 [2024-11-26 21:14:56.368901] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Unable to perform failover, already in progress. 
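For context on the abort storm above: the bdevperf session holds three RDMA paths to nqn.2016-06.io.spdk:cnode1, all registered with -x failover, so tearing down the active path's queues aborts in-flight I/O with SQ DELETION and moves to the next trid. A minimal sketch of that multipath setup, reusing the exact rpc.py invocations that appear later in this trace (socket path, address, and NQN taken from this log, not invented):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # Attach the same subsystem over three listeners; with -x failover the extra trids become standby paths.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done
  # Detaching the active path then forces a failover like the one logged above.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1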
00:18:14.634 [2024-11-26 21:14:56.371442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:18:14.634 [2024-11-26 21:14:56.399209] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:18:14.634 13248.70 IOPS, 51.75 MiB/s [2024-11-26T20:15:02.070Z] 13830.91 IOPS, 54.03 MiB/s [2024-11-26T20:15:02.070Z] 14313.08 IOPS, 55.91 MiB/s [2024-11-26T20:15:02.070Z] 14725.85 IOPS, 57.52 MiB/s [2024-11-26T20:15:02.070Z] 15073.43 IOPS, 58.88 MiB/s [2024-11-26T20:15:02.070Z] 15379.27 IOPS, 60.08 MiB/s
00:18:14.634 Latency(us)
00:18:14.634 [2024-11-26T20:15:02.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.634 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:14.634 Verification LBA range: start 0x0 length 0x4000
00:18:14.634 NVMe0n1 : 15.00 15378.38 60.07 250.33 0.00 8169.09 326.16 1031488.09
00:18:14.634 [2024-11-26T20:15:02.070Z] ===================================================================================================================
00:18:14.634 [2024-11-26T20:15:02.070Z] Total : 15378.38 60.07 250.33 0.00 8169.09 326.16 1031488.09
00:18:14.634 Received shutdown signal, test time was about 15.000000 seconds
00:18:14.634
00:18:14.634 Latency(us)
00:18:14.634 [2024-11-26T20:15:02.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:14.634 [2024-11-26T20:15:02.070Z] ===================================================================================================================
00:18:14.634 [2024-11-26T20:15:02.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3948092
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3948092 /var/tmp/bdevperf.sock
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3948092 ']'
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
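The host/failover.sh@65-67 entries above are the test's pass criterion: the captured bdevperf log must contain exactly three 'Resetting controller successful' messages, one per forced failover. A sketch of that check, assuming the first run's console output was captured to try.txt as the later @94 cat and @115 rm steps suggest:

  # Mirrors the grep -c / (( count != 3 )) pair traced above; file path per the @94/@115 steps below.
  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1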
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:18:14.634 21:15:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:18:14.892 [2024-11-26 21:15:02.089544] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:18:14.892 21:15:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:18:14.892 [2024-11-26 21:15:02.270170] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:18:14.892 21:15:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:15.150 NVMe0n1
00:18:15.150 21:15:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:15.407
00:18:15.407 21:15:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:18:15.665
00:18:15.665 21:15:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:15.665 21:15:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:18:15.922 21:15:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:16.180 21:15:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:18:19.457 21:15:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:15:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
21:15:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3949490
21:15:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
21:15:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3949490
00:18:20.390 {
00:18:20.390 "results": [
00:18:20.390 {
00:18:20.390 "job": "NVMe0n1",
00:18:20.390 "core_mask": "0x1", 00:18:20.390 "workload": "verify", 00:18:20.390 "status": "finished", 00:18:20.390 "verify_range": { 00:18:20.390 "start": 0, 00:18:20.390 "length": 16384 00:18:20.390 }, 00:18:20.390 "queue_depth": 128, 00:18:20.390 "io_size": 4096, 00:18:20.390 "runtime": 1.009569, 00:18:20.390 "iops": 19240.883981184048, 00:18:20.390 "mibps": 75.15970305150019, 00:18:20.390 "io_failed": 0, 00:18:20.390 "io_timeout": 0, 00:18:20.390 "avg_latency_us": 6614.964362915297, 00:18:20.390 "min_latency_us": 1856.8533333333332, 00:18:20.390 "max_latency_us": 18155.89925925926 00:18:20.390 } 00:18:20.390 ], 00:18:20.390 "core_count": 1 00:18:20.390 } 00:18:20.390 21:15:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:20.390 [2024-11-26 21:15:01.756779] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:18:20.390 [2024-11-26 21:15:01.756827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948092 ] 00:18:20.390 [2024-11-26 21:15:01.815478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.390 [2024-11-26 21:15:01.854728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.390 [2024-11-26 21:15:03.389538] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:18:20.390 [2024-11-26 21:15:03.390110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:18:20.390 [2024-11-26 21:15:03.390138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:18:20.390 [2024-11-26 21:15:03.410142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:18:20.390 [2024-11-26 21:15:03.426013] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:18:20.390 Running I/O for 1 seconds... 
00:18:20.390 19200.00 IOPS, 75.00 MiB/s
00:18:20.390
00:18:20.390 Latency(us)
00:18:20.390 [2024-11-26T20:15:07.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:20.390 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:18:20.390 Verification LBA range: start 0x0 length 0x4000
00:18:20.390 NVMe0n1 : 1.01 19240.88 75.16 0.00 0.00 6614.96 1856.85 18155.90
00:18:20.390 [2024-11-26T20:15:07.826Z] ===================================================================================================================
00:18:20.390 [2024-11-26T20:15:07.826Z] Total : 19240.88 75.16 0.00 0.00 6614.96 1856.85 18155.90
00:18:20.390 21:15:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:20.649 21:15:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:18:20.649 21:15:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:20.907 21:15:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:20.907 21:15:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:18:20.907 21:15:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:18:21.164 21:15:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3948092
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3948092 ']'
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3948092
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3948092
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3948092'
killing process with pid 3948092
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3948092
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3948092
21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:18:24.451 21:15:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:24.708 rmmod nvme_rdma
00:18:24.708 rmmod nvme_fabrics
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3944673 ']'
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3944673
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3944673 ']'
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3944673
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:24.708 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3944673
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3944673'
killing process with pid 3944673
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3944673
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3944673
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:18:24.966
00:18:24.966 real 0m34.501s
00:18:24.966 user 1m56.600s
00:18:24.966 sys 0m6.207s
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:24.966 21:15:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:18:24.966 ************************************
00:18:24.966 END TEST nvmf_failover
00:18:24.966 ************************************
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:25.223 ************************************
00:18:25.223 START TEST nvmf_host_discovery
00:18:25.223 ************************************
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:18:25.223 * Looking for test storage...
00:18:25.223 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:18:25.223 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.224 --rc genhtml_branch_coverage=1 00:18:25.224 --rc genhtml_function_coverage=1 00:18:25.224 --rc genhtml_legend=1 00:18:25.224 --rc geninfo_all_blocks=1 00:18:25.224 --rc geninfo_unexecuted_blocks=1 00:18:25.224 00:18:25.224 ' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.224 --rc genhtml_branch_coverage=1 00:18:25.224 --rc genhtml_function_coverage=1 00:18:25.224 --rc genhtml_legend=1 00:18:25.224 --rc geninfo_all_blocks=1 00:18:25.224 --rc geninfo_unexecuted_blocks=1 00:18:25.224 00:18:25.224 ' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.224 --rc genhtml_branch_coverage=1 00:18:25.224 --rc genhtml_function_coverage=1 00:18:25.224 --rc genhtml_legend=1 00:18:25.224 --rc geninfo_all_blocks=1 00:18:25.224 --rc geninfo_unexecuted_blocks=1 00:18:25.224 00:18:25.224 ' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.224 --rc genhtml_branch_coverage=1 00:18:25.224 --rc genhtml_function_coverage=1 00:18:25.224 --rc genhtml_legend=1 00:18:25.224 --rc geninfo_all_blocks=1 00:18:25.224 --rc geninfo_unexecuted_blocks=1 00:18:25.224 00:18:25.224 ' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:25.224 21:15:12 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:25.224 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.224 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:18:25.482 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:18:25.482 00:18:25.482 real 0m0.197s 00:18:25.482 user 0m0.116s 00:18:25.482 sys 0m0.095s 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:25.482 ************************************ 00:18:25.482 END TEST nvmf_host_discovery 00:18:25.482 ************************************ 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:25.482 ************************************ 00:18:25.482 START TEST nvmf_host_multipath_status 00:18:25.482 ************************************ 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:18:25.482 * Looking for test storage... 00:18:25.482 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.482 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:18:25.483 21:15:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:25.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.483 --rc genhtml_branch_coverage=1 00:18:25.483 --rc genhtml_function_coverage=1 00:18:25.483 --rc genhtml_legend=1 00:18:25.483 --rc geninfo_all_blocks=1 00:18:25.483 --rc geninfo_unexecuted_blocks=1 00:18:25.483 00:18:25.483 ' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:25.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.483 --rc genhtml_branch_coverage=1 00:18:25.483 --rc genhtml_function_coverage=1 00:18:25.483 --rc genhtml_legend=1 00:18:25.483 --rc geninfo_all_blocks=1 00:18:25.483 --rc geninfo_unexecuted_blocks=1 00:18:25.483 00:18:25.483 ' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:25.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.483 --rc genhtml_branch_coverage=1 00:18:25.483 --rc genhtml_function_coverage=1 00:18:25.483 --rc genhtml_legend=1 00:18:25.483 --rc geninfo_all_blocks=1 00:18:25.483 --rc geninfo_unexecuted_blocks=1 00:18:25.483 00:18:25.483 ' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:25.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.483 --rc genhtml_branch_coverage=1 00:18:25.483 --rc genhtml_function_coverage=1 
00:18:25.483 --rc genhtml_legend=1 00:18:25.483 --rc geninfo_all_blocks=1 00:18:25.483 --rc geninfo_unexecuted_blocks=1 00:18:25.483 00:18:25.483 ' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.483 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.484 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:25.484 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:25.484 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:25.484 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:25.484 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:18:25.741 21:15:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:31.008 21:15:18 
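[Editor's note] The `[: : integer expression expected` message at the top of this block (and in the nvmf_host_discovery run before it) is a small script defect the suite keeps hitting: nvmf/common.sh line 33 feeds an empty expansion to a numeric `[` test (`'[' '' -eq 1 ']'`), which `[` rejects. A minimal sketch of the failure and the usual guard; the flag name is hypothetical, since the trace does not show which variable was unset:

```bash
flag=""   # hypothetical stand-in for the unset variable behind '[' '' -eq 1 ']'

# Reproduces the diagnostic: [ cannot compare an empty string numerically.
[ "$flag" -eq 1 ] 2> /dev/null || echo "[: '' is not an integer" >&2

# Usual guards: default the expansion, or switch to a string comparison.
[ "${flag:-0}" -eq 1 ] && echo "flag set"
[[ $flag == 1 ]] && echo "flag set"
```

The defect is harmless here only because the test's false branch is the intended path; under `set -e` outside a conditional it would abort the run.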
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.008 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:18:31.009 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:18:31.009 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:31.009 Found net devices under 0000:18:00.0: mlx_0_0 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:31.009 Found net devices under 0000:18:00.1: mlx_0_1 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:31.009 
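[Editor's note] The two "Found net devices under 0000:18:00.x: mlx_0_x" lines above come from resolving each matching mlx5 PCI function to its kernel netdev through sysfs. A standalone sketch of that walk; the BDF addresses are the ones this run found, and on another box they would come from the `pci_bus_cache` lookups common.sh performs:

```bash
for pci in 0000:18:00.0 0000:18:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per netdev
    [[ -e ${pci_net_devs[0]} ]] || { echo "no netdev for $pci" >&2; continue; }
    pci_net_devs=("${pci_net_devs[@]##*/}")            # basename only: mlx_0_0, ...
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
```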
21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.009 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:31.268 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.268 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:18:31.268 altname enp24s0f0np0 00:18:31.268 altname ens785f0np0 00:18:31.268 inet 192.168.100.8/24 scope global mlx_0_0 00:18:31.268 valid_lft forever preferred_lft forever 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
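[Editor's note] The `interface= / ip -o -4 / awk / cut` sequence traced at nvmf/common.sh@116-117 is `get_ip_address` pulling the bare IPv4 address off each RDMA netdev. A minimal sketch of the same pipeline — `ip -o -4` prints one line per address, field 4 is "ADDR/PREFIX", and `cut -d/ -f1` strips the prefix length (yielding 192.168.100.8 and .9 in this run); the empty-address warning is an illustrative addition, since common.sh instead assigns an address itself when none is found:

```bash
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ip_address "$nic")
    [[ -n $addr ]] || echo "no IPv4 address on $nic" >&2
    echo "$nic -> $addr"
done
```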
00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:31.268 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.268 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:18:31.268 altname enp24s0f1np1 00:18:31.268 altname ens785f1np1 00:18:31.268 inet 192.168.100.9/24 scope global mlx_0_1 00:18:31.268 valid_lft forever preferred_lft forever 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:31.268 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:31.269 192.168.100.9' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:31.269 192.168.100.9' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:31.269 192.168.100.9' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@509 -- # nvmfpid=3953861 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3953861 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3953861 ']' 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.269 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.269 [2024-11-26 21:15:18.611230] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:18:31.269 [2024-11-26 21:15:18.611280] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.269 [2024-11-26 21:15:18.668266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:31.527 [2024-11-26 21:15:18.707265] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.527 [2024-11-26 21:15:18.707297] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.527 [2024-11-26 21:15:18.707303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.527 [2024-11-26 21:15:18.707309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.527 [2024-11-26 21:15:18.707313] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
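[Editor's note] The block above is `nvmfappstart -m 0x3`: launch `nvmf_tgt`, record `nvmfpid`, and block in `waitforlisten` until the RPC socket answers. A rough sketch of that start-up handshake using the same flags the trace shows (`-i 0 -e 0xFFFF -m 0x3`) and the default `/var/tmp/spdk.sock`; the explicit poll loop, retry budget, and error text stand in for waitforlisten and are illustrative:

```bash
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only answers once the app is listening on its UNIX socket
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done
(( i < 100 )) || { echo "nvmf_tgt (pid $nvmfpid) never started listening" >&2; exit 1; }
```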
00:18:31.527 [2024-11-26 21:15:18.708385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.527 [2024-11-26 21:15:18.708388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3953861 00:18:31.528 21:15:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:31.786 [2024-11-26 21:15:19.005217] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1852e70/0x1857360) succeed. 00:18:31.786 [2024-11-26 21:15:19.013154] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18543c0/0x1898a00) succeed. 00:18:31.786 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:32.043 Malloc0 00:18:32.043 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:18:32.043 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:32.299 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:32.557 [2024-11-26 21:15:19.793938] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:18:32.557 [2024-11-26 21:15:19.954138] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3954143 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3954143 /var/tmp/bdevperf.sock 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3954143 ']' 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.557 21:15:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:18:32.815 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.815 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:18:32.815 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:33.072 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:33.329 Nvme0n1 00:18:33.329 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:33.586 Nvme0n1 00:18:33.586 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:18:33.586 21:15:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:35.482 21:15:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:18:35.482 21:15:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:18:35.739 21:15:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:35.997 21:15:23 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:18:36.928 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:18:36.928 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:36.928 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:36.928 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.186 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:37.443 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.443 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:37.443 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.443 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:37.700 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.700 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:37.700 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.700 21:15:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:37.700 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.700 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
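[Editor's note] Every `port_status 44xx <attr> <expected>` entry above is one jq probe against bdevperf's RPC socket, and `check_status` just strings six of them together. A condensed sketch of the helper using the exact jq filter from the trace; the function body is reconstructed from the traced commands, not copied from multipath_status.sh:

```bash
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

# List bdevperf's I/O paths and let jq pull one flag for the listener with
# that trsvcid; the [[ ]] comparison makes each probe a pass/fail test.
port_status() {
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

# check_status true false true true true true, unrolled:
port_status 4420 current true && port_status 4421 current false &&
    port_status 4420 connected true && port_status 4421 connected true &&
    port_status 4420 accessible true && port_status 4421 accessible true
```

With both listeners optimized, only 4420 reports `current == true` (it carries the I/O), while both stay connected and accessible — exactly what the trace verifies here.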
00:18:37.700 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:37.700 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:37.957 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:37.957 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:18:37.957 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:38.214 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:38.214 21:15:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.585 21:15:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:39.842 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:39.842 21:15:27 
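[Editor's note] `set_ANA_state non_optimized optimized` above flips the ANA state of each listener and then the test sleeps one second before re-checking paths, so the host has time to observe the change. An unrolled sketch of that helper, built from the two RPC invocations visible in the trace:

```bash
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

# One listener per port on the same subsystem; each gets its own ANA state.
set_ANA_state() {
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state "$NQN" -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

set_ANA_state non_optimized optimized   # demote 4420, promote 4421
sleep 1                                 # let the host pick up the ANA change
```

After this, the expected picture inverts: 4421 becomes the current path while 4420 stays merely connected and accessible — which is what the following `check_status false true true true true true` asserts.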
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:39.842 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:39.842 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:40.099 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:40.355 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:40.355 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:18:40.355 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:40.613 21:15:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:18:40.613 21:15:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:41.983 21:15:29 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:41.983 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:42.239 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.239 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:42.239 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.239 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:42.496 21:15:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:42.753 21:15:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:42.753 21:15:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:18:42.753 21:15:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:43.010 21:15:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:18:43.267 21:15:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:18:44.198 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:18:44.198 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:44.198 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.198 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.456 21:15:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:44.713 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.713 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:44.713 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.713 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.970 21:15:32 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:44.970 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:45.251 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:45.251 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:18:45.251 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:18:45.562 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:18:45.562 21:15:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:18:46.568 21:15:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:18:46.568 21:15:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:46.568 21:15:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.568 21:15:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:46.825 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:46.825 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:46.825 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:46.825 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:47.081 21:15:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.081 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:47.338 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:47.338 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:47.338 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.338 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:18:47.596 21:15:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:18:47.853 21:15:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:48.110 21:15:35 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:18:49.041 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:18:49.041 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:49.041 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.041 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.298 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:49.555 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.556 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:49.556 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.556 21:15:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:49.813 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:49.813 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:18:49.813 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:49.813 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:50.071 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:18:50.329 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:18:50.329 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:18:50.586 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:50.586 21:15:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
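Every transition in this sequence is driven by set_ANA_state, which the trace shows issuing one nvmf_subsystem_listener_set_ana_state RPC per listener (the sh@59/sh@60 lines). A sketch reconstructed from those two traced commands, with rpc_py standing in for the full rpc.py path used above:

set_ANA_state() {
	# $1 = ANA state for the 4420 listener, $2 = for the 4421 listener;
	# each is one of: optimized | non_optimized | inaccessible.
	local rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t rdma -a 192.168.100.8 -s 4420 -n "$1"
	$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
		-t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

Note the asymmetry visible in the trace: these state-change RPCs go to the target's default RPC socket (no -s flag), while the subsequent status reads go to bdevperf via -s /var/tmp/bdevperf.sock. The sleep 1 after each pair presumably leaves time for the host to pick up the ANA log page change before check_status re-reads the path attributes.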
00:18:51.958 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.215 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.215 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:52.215 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.215 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:52.472 21:15:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:52.731 21:15:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:52.731 21:15:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:18:52.731 21:15:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:52.993 21:15:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:18:52.993 21:15:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:54.363 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.621 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:54.621 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.621 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:54.621 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.621 21:15:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:54.879 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:54.879 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:54.879 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:54.879 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:18:55.136 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:55.393 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:18:55.650 21:15:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:18:56.579 21:15:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:18:56.579 21:15:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:56.579 21:15:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.579 21:15:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:56.836 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:57.093 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.093 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:57.093 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.093 21:15:44 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:18:57.350 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:57.607 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:57.607 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:18:57.607 21:15:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:18:57.865 21:15:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:18:58.122 21:15:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:18:59.052 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:18:59.052 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:18:59.052 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.052 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:18:59.309 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.566 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.566 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:18:59.566 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:18:59.566 21:15:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:18:59.823 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3954143 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3954143 ']' 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3954143 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3954143 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3954143' 00:19:00.080 killing process with pid 3954143 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3954143 00:19:00.080 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3954143 00:19:00.080 { 00:19:00.080 "results": [ 00:19:00.080 { 00:19:00.080 "job": "Nvme0n1", 00:19:00.080 "core_mask": "0x4", 00:19:00.080 "workload": "verify", 00:19:00.080 "status": "terminated", 00:19:00.080 "verify_range": { 00:19:00.080 "start": 0, 00:19:00.080 "length": 16384 00:19:00.080 }, 00:19:00.080 "queue_depth": 128, 00:19:00.080 "io_size": 4096, 00:19:00.080 "runtime": 26.443376, 00:19:00.080 "iops": 17028.08294977162, 00:19:00.080 "mibps": 66.51594902254539, 00:19:00.080 "io_failed": 0, 00:19:00.080 "io_timeout": 0, 00:19:00.081 "avg_latency_us": 7496.657516567469, 00:19:00.081 "min_latency_us": 491.52, 00:19:00.081 "max_latency_us": 3019898.88 00:19:00.081 } 00:19:00.081 ], 00:19:00.081 "core_count": 1 00:19:00.081 } 00:19:00.340 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3954143 00:19:00.340 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:00.340 [2024-11-26 21:15:20.001737] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:19:00.340 [2024-11-26 21:15:20.001791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3954143 ] 00:19:00.340 [2024-11-26 21:15:20.060681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.340 [2024-11-26 21:15:20.102781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:00.340 Running I/O for 90 seconds... 
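The teardown traced above (sh@954..sh@978) is autotest_common.sh's killprocess, which guards against killing sudo itself before reaping the bdevperf process. A rough reconstruction from the traced checks (control flow inferred; comments map to the traced line numbers):

killprocess() {
	local pid=$1 process_name
	[[ -n $pid ]] || return 1                            # sh@954: '[' -z ... ']'
	kill -0 "$pid" || return 1                           # sh@958: pid must be alive
	if [[ $(uname) == Linux ]]; then                     # sh@959
		process_name=$(ps --no-headers -o comm= "$pid") # sh@960: reactor_2 here
	fi
	if [[ $process_name != sudo ]]; then                 # sh@964: never kill sudo
		echo "killing process with pid $pid"         # sh@972
		kill "$pid"                                  # sh@973
		wait "$pid"                                  # sh@978: collect exit status
	fi
}

The JSON block printed after the kill is bdevperf's terminated-job summary: roughly 17.0k IOPS / 66.5 MiB/s averaged over the 26.4 s run, with the ~3.0 s max latency likely reflecting the windows where both paths were set inaccessible.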
00:19:00.340 19712.00 IOPS, 77.00 MiB/s [2024-11-26T20:15:47.776Z] 19902.50 IOPS, 77.74 MiB/s [2024-11-26T20:15:47.776Z] 19925.33 IOPS, 77.83 MiB/s [2024-11-26T20:15:47.776Z] 19960.75 IOPS, 77.97 MiB/s [2024-11-26T20:15:47.776Z] 19968.00 IOPS, 78.00 MiB/s [2024-11-26T20:15:47.776Z] 20006.33 IOPS, 78.15 MiB/s [2024-11-26T20:15:47.776Z] 20019.00 IOPS, 78.20 MiB/s [2024-11-26T20:15:47.776Z] 20016.00 IOPS, 78.19 MiB/s [2024-11-26T20:15:47.776Z] 19991.33 IOPS, 78.09 MiB/s [2024-11-26T20:15:47.776Z] 19968.30 IOPS, 78.00 MiB/s [2024-11-26T20:15:47.776Z] 19971.09 IOPS, 78.01 MiB/s [2024-11-26T20:15:47.776Z] [2024-11-26 21:15:32.702624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x183100 00:19:00.340 [2024-11-26 21:15:32.702660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:19:00.340 [2024-11-26 21:15:32.702696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x183100 00:19:00.340 [2024-11-26 21:15:32.702704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183100 00:19:00.341 
[2024-11-26 21:15:32.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:00.341 [2024-11-26 21:15:32.702930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x183100 00:19:00.341 [2024-11-26 21:15:32.702936] 
[log condensed: a long run of repeated *NOTICE* entry pairs from nvme_qpair.c is elided here. Each pair prints a queued I/O on qid:1 (READ commands over SGL KEYED DATA BLOCK for lba:39104-39928, plus WRITE commands over SGL DATA BLOCK for lba:39936-39968) followed by its completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. NVMe Status Code Type 3h (Path Related) / Status Code 02h (Asymmetric Access Inaccessible), consistent with the multipath test holding this path in an inaccessible ANA state.]
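When triaging a run like this, a burst like the one condensed above is easier to summarize from the captured output than to read entry by entry. A minimal sketch, assuming the bdevperf output was captured to try.txt (the file multipath_status.sh@147 later removes; the filename here is an assumption about where the output landed). The grep patterns match the nvme_io_qpair_print_command / spdk_nvme_print_completion lines verbatim:

  # Count completions that failed with ANA Inaccessible (SCT 3h / SC 02h):
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt

  # Break the failing submissions down by opcode and submission queue:
  grep -Eo '\*NOTICE\*: (READ|WRITE) sqid:[0-9]+' try.txt | sort | uniq -c

  # First and last LBA touched by the failing READs on qid:1:
  grep '\*NOTICE\*: READ sqid:1' try.txt | grep -o 'lba:[0-9]*' | cut -d: -f2 | sort -n | sed -n '1p;$p'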
00:19:00.344 [2024-11-26 21:15:32.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:32.713406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:32.713420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:32.713426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:19:00.344 19520.58 IOPS, 76.25 MiB/s [2024-11-26T20:15:47.780Z] 18019.00 IOPS, 70.39 MiB/s [2024-11-26T20:15:47.780Z] 16731.93 IOPS, 65.36 MiB/s [2024-11-26T20:15:47.780Z] 15971.73 IOPS, 62.39 MiB/s [2024-11-26T20:15:47.780Z] 16231.31 IOPS, 63.40 MiB/s [2024-11-26T20:15:47.780Z] 16422.94 IOPS, 64.15 MiB/s [2024-11-26T20:15:47.780Z] 16402.67 IOPS, 64.07 MiB/s [2024-11-26T20:15:47.780Z] 16383.21 IOPS, 64.00 MiB/s [2024-11-26T20:15:47.780Z] 16524.95 IOPS, 64.55 MiB/s [2024-11-26T20:15:47.780Z] 16709.14 IOPS, 65.27 MiB/s [2024-11-26T20:15:47.780Z] 16846.27 IOPS, 65.81 MiB/s [2024-11-26T20:15:47.780Z] 16814.70 IOPS, 65.68 MiB/s [2024-11-26T20:15:47.780Z] 16780.42 IOPS, 65.55 MiB/s [2024-11-26T20:15:47.780Z] [2024-11-26 21:15:45.287385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:111936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:111984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:45.287503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:112536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:45.287526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:112552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:45.287541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f0000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:112048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.287579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:112064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.287585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:112568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:45.288011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:112176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.344 [2024-11-26 21:15:45.288070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:112208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 
cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:19:00.344 [2024-11-26 21:15:45.288110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x183100 00:19:00.344 [2024-11-26 21:15:45.288116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:111976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:112600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:112016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:112648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288227] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:112136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:112200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:112712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288371] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:112312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:112344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f8000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:112760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:112408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x183100 00:19:00.345 [2024-11-26 21:15:45.288563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:19:00.345 [2024-11-26 21:15:45.288572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.345 [2024-11-26 21:15:45.288578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0011 p:0 m:0 
dnr:0
00:19:00.345 [2024-11-26 21:15:45.288587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.345 [2024-11-26 21:15:45.288593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:19:00.345 [2024-11-26 21:15:45.288601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e6000 len:0x1000 key:0x183100
00:19:00.345 [2024-11-26 21:15:45.288608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:19:00.345 [2024-11-26 21:15:45.288616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:112800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.345 [2024-11-26 21:15:45.288622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:19:00.345 [2024-11-26 21:15:45.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:112808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.345 [2024-11-26 21:15:45.288636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:112272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:112288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:112864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:112872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:112384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:112896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:112912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:112440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:00.346 [2024-11-26 21:15:45.288871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:00.346 [2024-11-26 21:15:45.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x183100
00:19:00.346 [2024-11-26 21:15:45.288886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:00.346 16862.48 IOPS, 65.87 MiB/s [2024-11-26T20:15:47.782Z] 16985.92 IOPS, 66.35 MiB/s [2024-11-26T20:15:47.782Z] Received shutdown signal, test time was about 26.443979 seconds
00:19:00.346
00:19:00.346 Latency(us)
00:19:00.346 [2024-11-26T20:15:47.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:00.346 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:00.346 Verification LBA range: start 0x0 length 0x4000
00:19:00.346 Nvme0n1 : 26.44 17028.08 66.52 0.00 0.00 7496.66 491.52 3019898.88
00:19:00.346 [2024-11-26T20:15:47.782Z] ===================================================================================================================
00:19:00.346 [2024-11-26T20:15:47.782Z] Total : 17028.08 66.52 0.00 0.00 7496.66 491.52 3019898.88
00:19:00.346 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3953861 ']'
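The modprobe retry just traced (nvmf/common.sh@124-129) is the core of nvmftestfini's cleanup. As a rough sketch, assuming nothing beyond what the trace shows (the function name and the pacing between attempts here are illustrative, not the verbatim common.sh source), it amounts to:

    # Sketch: flush I/O, then retry unloading the initiator modules until
    # the kernel lets them go (they stay busy while connections drain).
    nvmfcleanup_sketch() {
        sync                                  # nvmf/common.sh@121: flush outstanding writes
        set +e                                # tolerate "module in use" failures
        for i in {1..20}; do                  # bounded retry, as in the trace
            modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
            sleep 1                           # assumption: the real script may pace retries differently
        done
        set -e
    }

The bare rmmod lines in the output are modprobe -v reporting that nvme_rdma and nvme_fabrics were in fact removed on the first pass.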
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3953861
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3953861 ']'
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3953861
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953861
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953861'
killing process with pid 3953861
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3953861
00:19:00.603 21:15:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3953861
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]]
00:19:00.861
00:19:00.861 real 0m35.400s
00:19:00.861 user 1m42.794s
00:19:00.861 sys 0m7.517s
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:00.861 ************************************
00:19:00.861 END TEST nvmf_host_multipath_status
************************************
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:00.861 ************************************
00:19:00.861 START TEST nvmf_discovery_remove_ifc
************************************
00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:19:00.861 * Looking for test storage...
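killprocess (common/autotest_common.sh@954-978), traced above for pid 3953861, guards against two footguns before signalling: a missing PID, and accidentally killing a sudo wrapper instead of the SPDK reactor. A condensed sketch, paraphrased from the xtrace rather than copied from the script:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # @954: no PID given
        kill -0 "$pid" || return 0                       # @958: process already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @960: here it resolves to reactor_0
        [ "$process_name" != "sudo" ] || return 1        # @964: never signal the sudo wrapper
        echo "killing process with pid $pid"             # @972
        kill "$pid"                                      # @973
        wait "$pid" 2>/dev/null || true                  # @978: reap; ignore the exit status
    }

The wait on @978 is what makes the following "real 0m35.400s" timing accurate: the test is not declared finished until the target process has actually exited.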
00:19:00.861 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:19:00.861 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:19:01.120 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:01.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.121 --rc genhtml_branch_coverage=1 00:19:01.121 --rc genhtml_function_coverage=1 00:19:01.121 --rc genhtml_legend=1 00:19:01.121 --rc geninfo_all_blocks=1 00:19:01.121 --rc geninfo_unexecuted_blocks=1 00:19:01.121 00:19:01.121 ' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:01.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.121 --rc genhtml_branch_coverage=1 00:19:01.121 --rc genhtml_function_coverage=1 00:19:01.121 --rc genhtml_legend=1 00:19:01.121 --rc geninfo_all_blocks=1 00:19:01.121 --rc geninfo_unexecuted_blocks=1 00:19:01.121 00:19:01.121 ' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:01.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.121 --rc genhtml_branch_coverage=1 00:19:01.121 --rc genhtml_function_coverage=1 00:19:01.121 --rc genhtml_legend=1 00:19:01.121 --rc geninfo_all_blocks=1 00:19:01.121 --rc geninfo_unexecuted_blocks=1 00:19:01.121 00:19:01.121 ' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:01.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.121 --rc genhtml_branch_coverage=1 00:19:01.121 --rc genhtml_function_coverage=1 00:19:01.121 --rc genhtml_legend=1 00:19:01.121 --rc geninfo_all_blocks=1 00:19:01.121 --rc geninfo_unexecuted_blocks=1 00:19:01.121 00:19:01.121 ' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
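The lt 1.15 2 check being traced here is cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' into arrays (ver1, ver2) and compared field by field from the left, so 1.15 is less than 2 because the first fields already differ. A condensed, illustrative sketch of that less-than path (not the verbatim function):

    version_lt() {
        local IFS=.-:                           # split fields exactly as the trace shows
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields compare as 0
            (( a > b )) && return 1             # first larger field: not less-than
            (( a < b )) && return 0             # first smaller field: less-than
        done
        return 1                                # all fields equal
    }

Something like version_lt "$(lcov --version | awk '{print $NF}')" 2 mirrors the guard above, which selects the LCOV_OPTS spelling appropriate for the detected lcov major version.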
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.121 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0
00:19:01.121
00:19:01.121 real 0m0.201s
00:19:01.121 user 0m0.124s
00:19:01.121 sys 0m0.089s
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:19:01.121 ************************************
00:19:01.121 END TEST nvmf_discovery_remove_ifc
************************************
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:01.121 ************************************
00:19:01.121 START TEST nvmf_identify_kernel_target
************************************
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma
00:19:01.121 * Looking for test storage...
00:19:01.121 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version
00:19:01.121 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target --
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.380 --rc genhtml_branch_coverage=1 00:19:01.380 --rc genhtml_function_coverage=1 00:19:01.380 --rc genhtml_legend=1 00:19:01.380 --rc geninfo_all_blocks=1 00:19:01.380 --rc geninfo_unexecuted_blocks=1 00:19:01.380 00:19:01.380 ' 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.380 --rc genhtml_branch_coverage=1 00:19:01.380 --rc genhtml_function_coverage=1 00:19:01.380 --rc genhtml_legend=1 00:19:01.380 --rc geninfo_all_blocks=1 00:19:01.380 --rc geninfo_unexecuted_blocks=1 00:19:01.380 00:19:01.380 ' 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.380 --rc genhtml_branch_coverage=1 00:19:01.380 --rc genhtml_function_coverage=1 00:19:01.380 --rc genhtml_legend=1 00:19:01.380 --rc geninfo_all_blocks=1 00:19:01.380 --rc geninfo_unexecuted_blocks=1 00:19:01.380 00:19:01.380 ' 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:01.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.380 --rc genhtml_branch_coverage=1 00:19:01.380 --rc genhtml_function_coverage=1 00:19:01.380 --rc genhtml_legend=1 00:19:01.380 --rc geninfo_all_blocks=1 00:19:01.380 --rc geninfo_unexecuted_blocks=1 00:19:01.380 00:19:01.380 ' 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:01.380 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:01.381 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:01.381 21:15:48 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:07.940 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:07.940 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:07.940 Found net devices under 0000:18:00.0: mlx_0_0 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:07.940 Found net devices under 0000:18:00.1: mlx_0_1 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:07.940 21:15:54 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.940 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.941 
21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]]
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0
00:19:07.941 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:07.941 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff
00:19:07.941 altname enp24s0f0np0
00:19:07.941 altname ens785f0np0
00:19:07.941 inet 192.168.100.8/24 scope global mlx_0_0
00:19:07.941 valid_lft forever preferred_lft forever
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list)
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}'
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]]
00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1
00:19:07.941 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:19:07.941 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff
00:19:07.941 altname enp24s0f1np1
00:19:07.941 altname ens785f1np1
00:19:07.941 inet 192.168.100.9/24 scope global mlx_0_1
00:19:07.941 valid_lft forever preferred_lft forever
00:19:07.941 21:15:54
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.941 
21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:07.941 192.168.100.9' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:07.941 192.168.100.9' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:07.941 192.168.100.9' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:07.941 21:15:54 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:07.941 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:07.942 21:15:54 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:09.834 Waiting for block devices as requested 00:19:09.834 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:09.834 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:09.834 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:09.834 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:10.092 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:10.092 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:10.092 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:10.092 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:10.349 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:10.349 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:10.349 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:10.606 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:10.606 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:10.606 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:10.606 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:10.864 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:10.864 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:12.234 21:15:59 
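Note: configure_kernel_target, entered above, builds the kernel NVMe-oF target entirely through configfs. The trace that follows picks an unused local NVMe namespace as backing storage, creates the subsystem, namespace, and port directories, fills in their attributes, and exports the subsystem by symlinking it under the port. A condensed sketch of that flow, run as root with nvmet/nvmet-rdma loaded; the attribute file names are inferred from the kernel nvmet configfs layout (xtrace does not show redirection targets), and the device and address are the ones from this run:

#!/usr/bin/env bash
set -euo pipefail
nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn

mkdir "$nvmet/subsystems/$nqn"                # subsystem; configfs auto-creates namespaces/
mkdir "$nvmet/subsystems/$nqn/namespaces/1"   # namespace 1
mkdir "$nvmet/ports/1"                        # port 1

echo "SPDK-$nqn"   > "$nvmet/subsystems/$nqn/attr_model"
echo 1             > "$nvmet/subsystems/$nqn/attr_allow_any_host"
echo /dev/nvme0n1  > "$nvmet/subsystems/$nqn/namespaces/1/device_path"
echo 1             > "$nvmet/subsystems/$nqn/namespaces/1/enable"

echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"

# Exporting the subsystem on the port is just a symlink.
ln -s "$nvmet/subsystems/$nqn" "$nvmet/ports/1/subsystems/"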
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:12.234 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:12.492 No valid GPT data, bailing 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:12.492 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:12.752 00:19:12.752 Discovery Log Number of Records 2, Generation counter 2 00:19:12.752 =====Discovery Log Entry 0====== 00:19:12.752 trtype: rdma 00:19:12.752 adrfam: ipv4 00:19:12.752 subtype: current discovery subsystem 00:19:12.752 treq: not specified, sq 
flow control disable supported 00:19:12.752 portid: 1 00:19:12.752 trsvcid: 4420 00:19:12.752 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:12.752 traddr: 192.168.100.8 00:19:12.752 eflags: none 00:19:12.752 rdma_prtype: not specified 00:19:12.752 rdma_qptype: connected 00:19:12.752 rdma_cms: rdma-cm 00:19:12.752 rdma_pkey: 0x0000 00:19:12.752 =====Discovery Log Entry 1====== 00:19:12.752 trtype: rdma 00:19:12.752 adrfam: ipv4 00:19:12.752 subtype: nvme subsystem 00:19:12.752 treq: not specified, sq flow control disable supported 00:19:12.752 portid: 1 00:19:12.752 trsvcid: 4420 00:19:12.752 subnqn: nqn.2016-06.io.spdk:testnqn 00:19:12.752 traddr: 192.168.100.8 00:19:12.752 eflags: none 00:19:12.752 rdma_prtype: not specified 00:19:12.752 rdma_qptype: connected 00:19:12.752 rdma_cms: rdma-cm 00:19:12.752 rdma_pkey: 0x0000 00:19:12.752 21:15:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:19:12.752 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:19:12.752 ===================================================== 00:19:12.752 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:19:12.752 ===================================================== 00:19:12.752 Controller Capabilities/Features 00:19:12.752 ================================ 00:19:12.752 Vendor ID: 0000 00:19:12.752 Subsystem Vendor ID: 0000 00:19:12.752 Serial Number: c3e856a5346f641d2371 00:19:12.752 Model Number: Linux 00:19:12.752 Firmware Version: 6.8.9-20 00:19:12.752 Recommended Arb Burst: 0 00:19:12.752 IEEE OUI Identifier: 00 00 00 00:19:12.752 Multi-path I/O 00:19:12.752 May have multiple subsystem ports: No 00:19:12.752 May have multiple controllers: No 00:19:12.752 Associated with SR-IOV VF: No 00:19:12.752 Max Data Transfer Size: Unlimited 00:19:12.752 Max Number of Namespaces: 0 00:19:12.752 Max Number of I/O Queues: 1024 00:19:12.752 NVMe Specification Version (VS): 1.3 00:19:12.752 NVMe Specification Version (Identify): 1.3 00:19:12.752 Maximum Queue Entries: 128 00:19:12.752 Contiguous Queues Required: No 00:19:12.752 Arbitration Mechanisms Supported 00:19:12.752 Weighted Round Robin: Not Supported 00:19:12.752 Vendor Specific: Not Supported 00:19:12.752 Reset Timeout: 7500 ms 00:19:12.752 Doorbell Stride: 4 bytes 00:19:12.752 NVM Subsystem Reset: Not Supported 00:19:12.752 Command Sets Supported 00:19:12.752 NVM Command Set: Supported 00:19:12.752 Boot Partition: Not Supported 00:19:12.752 Memory Page Size Minimum: 4096 bytes 00:19:12.752 Memory Page Size Maximum: 4096 bytes 00:19:12.752 Persistent Memory Region: Not Supported 00:19:12.752 Optional Asynchronous Events Supported 00:19:12.752 Namespace Attribute Notices: Not Supported 00:19:12.752 Firmware Activation Notices: Not Supported 00:19:12.752 ANA Change Notices: Not Supported 00:19:12.752 PLE Aggregate Log Change Notices: Not Supported 00:19:12.752 LBA Status Info Alert Notices: Not Supported 00:19:12.752 EGE Aggregate Log Change Notices: Not Supported 00:19:12.752 Normal NVM Subsystem Shutdown event: Not Supported 00:19:12.752 Zone Descriptor Change Notices: Not Supported 00:19:12.752 Discovery Log Change Notices: Supported 00:19:12.752 Controller Attributes 00:19:12.752 128-bit Host Identifier: Not Supported 00:19:12.752 Non-Operational Permissive Mode: Not Supported 00:19:12.752 NVM Sets: Not Supported 00:19:12.752 Read Recovery Levels: 
Not Supported 00:19:12.752 Endurance Groups: Not Supported 00:19:12.752 Predictable Latency Mode: Not Supported 00:19:12.752 Traffic Based Keep ALive: Not Supported 00:19:12.752 Namespace Granularity: Not Supported 00:19:12.752 SQ Associations: Not Supported 00:19:12.752 UUID List: Not Supported 00:19:12.752 Multi-Domain Subsystem: Not Supported 00:19:12.752 Fixed Capacity Management: Not Supported 00:19:12.752 Variable Capacity Management: Not Supported 00:19:12.752 Delete Endurance Group: Not Supported 00:19:12.752 Delete NVM Set: Not Supported 00:19:12.752 Extended LBA Formats Supported: Not Supported 00:19:12.752 Flexible Data Placement Supported: Not Supported 00:19:12.752 00:19:12.752 Controller Memory Buffer Support 00:19:12.752 ================================ 00:19:12.752 Supported: No 00:19:12.752 00:19:12.752 Persistent Memory Region Support 00:19:12.752 ================================ 00:19:12.752 Supported: No 00:19:12.752 00:19:12.752 Admin Command Set Attributes 00:19:12.752 ============================ 00:19:12.752 Security Send/Receive: Not Supported 00:19:12.752 Format NVM: Not Supported 00:19:12.752 Firmware Activate/Download: Not Supported 00:19:12.752 Namespace Management: Not Supported 00:19:12.752 Device Self-Test: Not Supported 00:19:12.752 Directives: Not Supported 00:19:12.752 NVMe-MI: Not Supported 00:19:12.752 Virtualization Management: Not Supported 00:19:12.752 Doorbell Buffer Config: Not Supported 00:19:12.752 Get LBA Status Capability: Not Supported 00:19:12.752 Command & Feature Lockdown Capability: Not Supported 00:19:12.752 Abort Command Limit: 1 00:19:12.752 Async Event Request Limit: 1 00:19:12.752 Number of Firmware Slots: N/A 00:19:12.752 Firmware Slot 1 Read-Only: N/A 00:19:12.752 Firmware Activation Without Reset: N/A 00:19:12.752 Multiple Update Detection Support: N/A 00:19:12.752 Firmware Update Granularity: No Information Provided 00:19:12.752 Per-Namespace SMART Log: No 00:19:12.752 Asymmetric Namespace Access Log Page: Not Supported 00:19:12.752 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:19:12.752 Command Effects Log Page: Not Supported 00:19:12.752 Get Log Page Extended Data: Supported 00:19:12.752 Telemetry Log Pages: Not Supported 00:19:12.752 Persistent Event Log Pages: Not Supported 00:19:12.752 Supported Log Pages Log Page: May Support 00:19:12.752 Commands Supported & Effects Log Page: Not Supported 00:19:12.752 Feature Identifiers & Effects Log Page:May Support 00:19:12.752 NVMe-MI Commands & Effects Log Page: May Support 00:19:12.752 Data Area 4 for Telemetry Log: Not Supported 00:19:12.752 Error Log Page Entries Supported: 1 00:19:12.752 Keep Alive: Not Supported 00:19:12.752 00:19:12.752 NVM Command Set Attributes 00:19:12.752 ========================== 00:19:12.752 Submission Queue Entry Size 00:19:12.752 Max: 1 00:19:12.752 Min: 1 00:19:12.752 Completion Queue Entry Size 00:19:12.752 Max: 1 00:19:12.752 Min: 1 00:19:12.752 Number of Namespaces: 0 00:19:12.752 Compare Command: Not Supported 00:19:12.752 Write Uncorrectable Command: Not Supported 00:19:12.752 Dataset Management Command: Not Supported 00:19:12.752 Write Zeroes Command: Not Supported 00:19:12.752 Set Features Save Field: Not Supported 00:19:12.752 Reservations: Not Supported 00:19:12.752 Timestamp: Not Supported 00:19:12.752 Copy: Not Supported 00:19:12.752 Volatile Write Cache: Not Present 00:19:12.752 Atomic Write Unit (Normal): 1 00:19:12.752 Atomic Write Unit (PFail): 1 00:19:12.752 Atomic Compare & Write Unit: 1 00:19:12.752 Fused Compare & Write: Not 
Supported 00:19:12.752 Scatter-Gather List 00:19:12.752 SGL Command Set: Supported 00:19:12.752 SGL Keyed: Supported 00:19:12.752 SGL Bit Bucket Descriptor: Not Supported 00:19:12.752 SGL Metadata Pointer: Not Supported 00:19:12.752 Oversized SGL: Not Supported 00:19:12.752 SGL Metadata Address: Not Supported 00:19:12.752 SGL Offset: Supported 00:19:12.752 Transport SGL Data Block: Not Supported 00:19:12.752 Replay Protected Memory Block: Not Supported 00:19:12.752 00:19:12.752 Firmware Slot Information 00:19:12.752 ========================= 00:19:12.752 Active slot: 0 00:19:12.752 00:19:12.752 00:19:12.752 Error Log 00:19:12.752 ========= 00:19:12.752 00:19:12.752 Active Namespaces 00:19:12.752 ================= 00:19:12.752 Discovery Log Page 00:19:12.752 ================== 00:19:12.752 Generation Counter: 2 00:19:12.752 Number of Records: 2 00:19:12.752 Record Format: 0 00:19:12.752 00:19:12.752 Discovery Log Entry 0 00:19:12.752 ---------------------- 00:19:12.752 Transport Type: 1 (RDMA) 00:19:12.752 Address Family: 1 (IPv4) 00:19:12.752 Subsystem Type: 3 (Current Discovery Subsystem) 00:19:12.752 Entry Flags: 00:19:12.752 Duplicate Returned Information: 0 00:19:12.752 Explicit Persistent Connection Support for Discovery: 0 00:19:12.752 Transport Requirements: 00:19:12.752 Secure Channel: Not Specified 00:19:12.752 Port ID: 1 (0x0001) 00:19:12.753 Controller ID: 65535 (0xffff) 00:19:12.753 Admin Max SQ Size: 32 00:19:12.753 Transport Service Identifier: 4420 00:19:12.753 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:19:12.753 Transport Address: 192.168.100.8 00:19:12.753 Transport Specific Address Subtype - RDMA 00:19:12.753 RDMA QP Service Type: 1 (Reliable Connected) 00:19:12.753 RDMA Provider Type: 1 (No provider specified) 00:19:12.753 RDMA CM Service: 1 (RDMA_CM) 00:19:12.753 Discovery Log Entry 1 00:19:12.753 ---------------------- 00:19:12.753 Transport Type: 1 (RDMA) 00:19:12.753 Address Family: 1 (IPv4) 00:19:12.753 Subsystem Type: 2 (NVM Subsystem) 00:19:12.753 Entry Flags: 00:19:12.753 Duplicate Returned Information: 0 00:19:12.753 Explicit Persistent Connection Support for Discovery: 0 00:19:12.753 Transport Requirements: 00:19:12.753 Secure Channel: Not Specified 00:19:12.753 Port ID: 1 (0x0001) 00:19:12.753 Controller ID: 65535 (0xffff) 00:19:12.753 Admin Max SQ Size: 32 00:19:12.753 Transport Service Identifier: 4420 00:19:12.753 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:19:12.753 Transport Address: 192.168.100.8 00:19:12.753 Transport Specific Address Subtype - RDMA 00:19:12.753 RDMA QP Service Type: 1 (Reliable Connected) 00:19:12.753 RDMA Provider Type: 1 (No provider specified) 00:19:12.753 RDMA CM Service: 1 (RDMA_CM) 00:19:12.753 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:19:12.753 get_feature(0x01) failed 00:19:12.753 get_feature(0x02) failed 00:19:12.753 get_feature(0x04) failed 00:19:12.753 ===================================================== 00:19:12.753 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:19:12.753 ===================================================== 00:19:12.753 Controller Capabilities/Features 00:19:12.753 ================================ 00:19:12.753 Vendor ID: 0000 00:19:12.753 Subsystem Vendor ID: 0000 00:19:12.753 Serial Number: 
95a1f690e49fe5654511 00:19:12.753 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:19:12.753 Firmware Version: 6.8.9-20 00:19:12.753 Recommended Arb Burst: 6 00:19:12.753 IEEE OUI Identifier: 00 00 00 00:19:12.753 Multi-path I/O 00:19:12.753 May have multiple subsystem ports: Yes 00:19:12.753 May have multiple controllers: Yes 00:19:12.753 Associated with SR-IOV VF: No 00:19:12.753 Max Data Transfer Size: 1048576 00:19:12.753 Max Number of Namespaces: 1024 00:19:12.753 Max Number of I/O Queues: 128 00:19:12.753 NVMe Specification Version (VS): 1.3 00:19:12.753 NVMe Specification Version (Identify): 1.3 00:19:12.753 Maximum Queue Entries: 128 00:19:12.753 Contiguous Queues Required: No 00:19:12.753 Arbitration Mechanisms Supported 00:19:12.753 Weighted Round Robin: Not Supported 00:19:12.753 Vendor Specific: Not Supported 00:19:12.753 Reset Timeout: 7500 ms 00:19:12.753 Doorbell Stride: 4 bytes 00:19:12.753 NVM Subsystem Reset: Not Supported 00:19:12.753 Command Sets Supported 00:19:12.753 NVM Command Set: Supported 00:19:12.753 Boot Partition: Not Supported 00:19:12.753 Memory Page Size Minimum: 4096 bytes 00:19:12.753 Memory Page Size Maximum: 4096 bytes 00:19:12.753 Persistent Memory Region: Not Supported 00:19:12.753 Optional Asynchronous Events Supported 00:19:12.753 Namespace Attribute Notices: Supported 00:19:12.753 Firmware Activation Notices: Not Supported 00:19:12.753 ANA Change Notices: Supported 00:19:12.753 PLE Aggregate Log Change Notices: Not Supported 00:19:12.753 LBA Status Info Alert Notices: Not Supported 00:19:12.753 EGE Aggregate Log Change Notices: Not Supported 00:19:12.753 Normal NVM Subsystem Shutdown event: Not Supported 00:19:12.753 Zone Descriptor Change Notices: Not Supported 00:19:12.753 Discovery Log Change Notices: Not Supported 00:19:12.753 Controller Attributes 00:19:12.753 128-bit Host Identifier: Supported 00:19:12.753 Non-Operational Permissive Mode: Not Supported 00:19:12.753 NVM Sets: Not Supported 00:19:12.753 Read Recovery Levels: Not Supported 00:19:12.753 Endurance Groups: Not Supported 00:19:12.753 Predictable Latency Mode: Not Supported 00:19:12.753 Traffic Based Keep ALive: Supported 00:19:12.753 Namespace Granularity: Not Supported 00:19:12.753 SQ Associations: Not Supported 00:19:12.753 UUID List: Not Supported 00:19:12.753 Multi-Domain Subsystem: Not Supported 00:19:12.753 Fixed Capacity Management: Not Supported 00:19:12.753 Variable Capacity Management: Not Supported 00:19:12.753 Delete Endurance Group: Not Supported 00:19:12.753 Delete NVM Set: Not Supported 00:19:12.753 Extended LBA Formats Supported: Not Supported 00:19:12.753 Flexible Data Placement Supported: Not Supported 00:19:12.753 00:19:12.753 Controller Memory Buffer Support 00:19:12.753 ================================ 00:19:12.753 Supported: No 00:19:12.753 00:19:12.753 Persistent Memory Region Support 00:19:12.753 ================================ 00:19:12.753 Supported: No 00:19:12.753 00:19:12.753 Admin Command Set Attributes 00:19:12.753 ============================ 00:19:12.753 Security Send/Receive: Not Supported 00:19:12.753 Format NVM: Not Supported 00:19:12.753 Firmware Activate/Download: Not Supported 00:19:12.753 Namespace Management: Not Supported 00:19:12.753 Device Self-Test: Not Supported 00:19:12.753 Directives: Not Supported 00:19:12.753 NVMe-MI: Not Supported 00:19:12.753 Virtualization Management: Not Supported 00:19:12.753 Doorbell Buffer Config: Not Supported 00:19:12.753 Get LBA Status Capability: Not Supported 00:19:12.753 Command & Feature Lockdown 
Capability: Not Supported 00:19:12.753 Abort Command Limit: 4 00:19:12.753 Async Event Request Limit: 4 00:19:12.753 Number of Firmware Slots: N/A 00:19:12.753 Firmware Slot 1 Read-Only: N/A 00:19:12.753 Firmware Activation Without Reset: N/A 00:19:12.753 Multiple Update Detection Support: N/A 00:19:12.753 Firmware Update Granularity: No Information Provided 00:19:12.753 Per-Namespace SMART Log: Yes 00:19:12.753 Asymmetric Namespace Access Log Page: Supported 00:19:12.753 ANA Transition Time : 10 sec 00:19:12.753 00:19:12.753 Asymmetric Namespace Access Capabilities 00:19:12.753 ANA Optimized State : Supported 00:19:12.753 ANA Non-Optimized State : Supported 00:19:12.753 ANA Inaccessible State : Supported 00:19:12.753 ANA Persistent Loss State : Supported 00:19:12.753 ANA Change State : Supported 00:19:12.753 ANAGRPID is not changed : No 00:19:12.753 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:19:12.753 00:19:12.753 ANA Group Identifier Maximum : 128 00:19:12.753 Number of ANA Group Identifiers : 128 00:19:12.753 Max Number of Allowed Namespaces : 1024 00:19:12.753 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:19:12.753 Command Effects Log Page: Supported 00:19:12.753 Get Log Page Extended Data: Supported 00:19:12.753 Telemetry Log Pages: Not Supported 00:19:12.753 Persistent Event Log Pages: Not Supported 00:19:12.753 Supported Log Pages Log Page: May Support 00:19:12.753 Commands Supported & Effects Log Page: Not Supported 00:19:12.753 Feature Identifiers & Effects Log Page:May Support 00:19:12.753 NVMe-MI Commands & Effects Log Page: May Support 00:19:12.753 Data Area 4 for Telemetry Log: Not Supported 00:19:12.753 Error Log Page Entries Supported: 128 00:19:12.753 Keep Alive: Supported 00:19:12.753 Keep Alive Granularity: 1000 ms 00:19:12.753 00:19:12.753 NVM Command Set Attributes 00:19:12.753 ========================== 00:19:12.753 Submission Queue Entry Size 00:19:12.753 Max: 64 00:19:12.753 Min: 64 00:19:12.753 Completion Queue Entry Size 00:19:12.753 Max: 16 00:19:12.753 Min: 16 00:19:12.753 Number of Namespaces: 1024 00:19:12.753 Compare Command: Not Supported 00:19:12.753 Write Uncorrectable Command: Not Supported 00:19:12.753 Dataset Management Command: Supported 00:19:12.753 Write Zeroes Command: Supported 00:19:12.753 Set Features Save Field: Not Supported 00:19:12.753 Reservations: Not Supported 00:19:12.753 Timestamp: Not Supported 00:19:12.753 Copy: Not Supported 00:19:12.753 Volatile Write Cache: Present 00:19:12.753 Atomic Write Unit (Normal): 1 00:19:12.753 Atomic Write Unit (PFail): 1 00:19:12.753 Atomic Compare & Write Unit: 1 00:19:12.753 Fused Compare & Write: Not Supported 00:19:12.753 Scatter-Gather List 00:19:12.753 SGL Command Set: Supported 00:19:12.753 SGL Keyed: Supported 00:19:12.753 SGL Bit Bucket Descriptor: Not Supported 00:19:12.753 SGL Metadata Pointer: Not Supported 00:19:12.753 Oversized SGL: Not Supported 00:19:12.753 SGL Metadata Address: Not Supported 00:19:12.753 SGL Offset: Supported 00:19:12.753 Transport SGL Data Block: Not Supported 00:19:12.753 Replay Protected Memory Block: Not Supported 00:19:12.753 00:19:12.753 Firmware Slot Information 00:19:12.753 ========================= 00:19:12.753 Active slot: 0 00:19:12.753 00:19:12.753 Asymmetric Namespace Access 00:19:12.753 =========================== 00:19:12.753 Change Count : 0 00:19:12.754 Number of ANA Group Descriptors : 1 00:19:12.754 ANA Group Descriptor : 0 00:19:12.754 ANA Group ID : 1 00:19:12.754 Number of NSID Values : 1 00:19:12.754 Change Count : 0 00:19:12.754 ANA State 
: 1 00:19:12.754 Namespace Identifier : 1 00:19:12.754 00:19:12.754 Commands Supported and Effects 00:19:12.754 ============================== 00:19:12.754 Admin Commands 00:19:12.754 -------------- 00:19:12.754 Get Log Page (02h): Supported 00:19:12.754 Identify (06h): Supported 00:19:12.754 Abort (08h): Supported 00:19:12.754 Set Features (09h): Supported 00:19:12.754 Get Features (0Ah): Supported 00:19:12.754 Asynchronous Event Request (0Ch): Supported 00:19:12.754 Keep Alive (18h): Supported 00:19:12.754 I/O Commands 00:19:12.754 ------------ 00:19:12.754 Flush (00h): Supported 00:19:12.754 Write (01h): Supported LBA-Change 00:19:12.754 Read (02h): Supported 00:19:12.754 Write Zeroes (08h): Supported LBA-Change 00:19:12.754 Dataset Management (09h): Supported 00:19:12.754 00:19:12.754 Error Log 00:19:12.754 ========= 00:19:12.754 Entry: 0 00:19:12.754 Error Count: 0x3 00:19:12.754 Submission Queue Id: 0x0 00:19:12.754 Command Id: 0x5 00:19:12.754 Phase Bit: 0 00:19:12.754 Status Code: 0x2 00:19:12.754 Status Code Type: 0x0 00:19:12.754 Do Not Retry: 1 00:19:13.011 Error Location: 0x28 00:19:13.011 LBA: 0x0 00:19:13.011 Namespace: 0x0 00:19:13.011 Vendor Log Page: 0x0 00:19:13.011 ----------- 00:19:13.011 Entry: 1 00:19:13.011 Error Count: 0x2 00:19:13.011 Submission Queue Id: 0x0 00:19:13.011 Command Id: 0x5 00:19:13.011 Phase Bit: 0 00:19:13.011 Status Code: 0x2 00:19:13.011 Status Code Type: 0x0 00:19:13.011 Do Not Retry: 1 00:19:13.011 Error Location: 0x28 00:19:13.011 LBA: 0x0 00:19:13.011 Namespace: 0x0 00:19:13.011 Vendor Log Page: 0x0 00:19:13.011 ----------- 00:19:13.011 Entry: 2 00:19:13.011 Error Count: 0x1 00:19:13.011 Submission Queue Id: 0x0 00:19:13.011 Command Id: 0x0 00:19:13.011 Phase Bit: 0 00:19:13.011 Status Code: 0x2 00:19:13.011 Status Code Type: 0x0 00:19:13.011 Do Not Retry: 1 00:19:13.011 Error Location: 0x28 00:19:13.011 LBA: 0x0 00:19:13.011 Namespace: 0x0 00:19:13.011 Vendor Log Page: 0x0 00:19:13.011 00:19:13.011 Number of Queues 00:19:13.011 ================ 00:19:13.011 Number of I/O Submission Queues: 128 00:19:13.011 Number of I/O Completion Queues: 128 00:19:13.011 00:19:13.011 ZNS Specific Controller Data 00:19:13.011 ============================ 00:19:13.011 Zone Append Size Limit: 0 00:19:13.011 00:19:13.011 00:19:13.011 Active Namespaces 00:19:13.011 ================= 00:19:13.011 get_feature(0x05) failed 00:19:13.011 Namespace ID:1 00:19:13.011 Command Set Identifier: NVM (00h) 00:19:13.011 Deallocate: Supported 00:19:13.011 Deallocated/Unwritten Error: Not Supported 00:19:13.011 Deallocated Read Value: Unknown 00:19:13.011 Deallocate in Write Zeroes: Not Supported 00:19:13.011 Deallocated Guard Field: 0xFFFF 00:19:13.011 Flush: Supported 00:19:13.011 Reservation: Not Supported 00:19:13.011 Namespace Sharing Capabilities: Multiple Controllers 00:19:13.011 Size (in LBAs): 7814037168 (3726GiB) 00:19:13.011 Capacity (in LBAs): 7814037168 (3726GiB) 00:19:13.011 Utilization (in LBAs): 7814037168 (3726GiB) 00:19:13.011 UUID: e56b2270-8242-49dc-bf2f-983afbb9f30c 00:19:13.011 Thin Provisioning: Not Supported 00:19:13.011 Per-NS Atomic Units: Yes 00:19:13.011 Atomic Boundary Size (Normal): 0 00:19:13.011 Atomic Boundary Size (PFail): 0 00:19:13.011 Atomic Boundary Offset: 0 00:19:13.011 NGUID/EUI64 Never Reused: No 00:19:13.011 ANA group ID: 1 00:19:13.011 Namespace Write Protected: No 00:19:13.011 Number of LBA Formats: 1 00:19:13.011 Current LBA Format: LBA Format #00 00:19:13.011 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:13.011 00:19:13.011 
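Note: both identify passes above address the target through an SPDK transport-ID string (trtype/adrfam/traddr/trsvcid/subnqn). The same kernel target answers stock nvme-cli, whose flags map one-to-one onto those fields; a sketch, with the address and NQN taken from this run:

# Discover, connect, and disconnect with nvme-cli instead of spdk_nvme_identify.
nvme discover -t rdma -a 192.168.100.8 -s 4420
nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:testnqn
nvme list                                       # exported namespace appears as a new node
nvme disconnect -n nqn.2016-06.io.spdk:testnqn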
21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:13.011 rmmod nvme_rdma 00:19:13.011 rmmod nvme_fabrics 00:19:13.011 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:19:13.012 21:16:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:19:15.540 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 
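Note: clean_kernel_target, traced above via the EXIT trap, tears the target down strictly in reverse: disable the namespace, unlink the subsystem from the port, remove the configfs directories leaf-first, then unload the modules. A sketch of that order (same paths as the setup; configfs returns EBUSY if a directory is removed while still referenced, and the target of the `echo 0` is inferred to be the namespace enable attribute):

nvmet=/sys/kernel/config/nvmet
nqn=nqn.2016-06.io.spdk:testnqn

echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # quiesce the namespace
rm -f  "$nvmet/ports/1/subsystems/$nqn"                 # un-export from the port
rmdir  "$nvmet/subsystems/$nqn/namespaces/1"
rmdir  "$nvmet/ports/1"
rmdir  "$nvmet/subsystems/$nqn"
modprobe -r nvmet_rdma nvmet                            # modules now unreferenced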
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:19:15.540 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:19:18.826 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:19:20.727 00:19:20.727 real 0m19.219s 00:19:20.727 user 0m4.880s 00:19:20.727 sys 0m10.103s 00:19:20.727 21:16:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.728 ************************************ 00:19:20.728 END TEST nvmf_identify_kernel_target 00:19:20.728 ************************************ 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:20.728 ************************************ 00:19:20.728 START TEST nvmf_auth_host 00:19:20.728 ************************************ 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:19:20.728 * Looking for test storage... 
00:19:20.728 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.728 --rc genhtml_branch_coverage=1 00:19:20.728 --rc genhtml_function_coverage=1 00:19:20.728 --rc genhtml_legend=1 00:19:20.728 --rc geninfo_all_blocks=1 00:19:20.728 --rc geninfo_unexecuted_blocks=1 00:19:20.728 00:19:20.728 ' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.728 --rc genhtml_branch_coverage=1 00:19:20.728 --rc genhtml_function_coverage=1 00:19:20.728 --rc genhtml_legend=1 00:19:20.728 --rc geninfo_all_blocks=1 00:19:20.728 --rc geninfo_unexecuted_blocks=1 00:19:20.728 00:19:20.728 ' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.728 --rc genhtml_branch_coverage=1 00:19:20.728 --rc genhtml_function_coverage=1 00:19:20.728 --rc genhtml_legend=1 00:19:20.728 --rc geninfo_all_blocks=1 00:19:20.728 --rc geninfo_unexecuted_blocks=1 00:19:20.728 00:19:20.728 ' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:20.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.728 --rc genhtml_branch_coverage=1 00:19:20.728 --rc genhtml_function_coverage=1 00:19:20.728 --rc genhtml_legend=1 00:19:20.728 --rc geninfo_all_blocks=1 00:19:20.728 --rc geninfo_unexecuted_blocks=1 00:19:20.728 00:19:20.728 ' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.728 21:16:07 
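Note: the lt/cmp_versions exchange above decides whether lcov 1.15 predates 2 by splitting both versions on the characters .-: and comparing components numerically left to right, padding the shorter list with zeros. A compact reimplementation of that comparison (the name lt_version is illustrative; the harness's own helpers live in scripts/common.sh and additionally validate each component with a ^[0-9]+$ check):

# Return 0 (true) when version $1 is strictly older than $2.
# Assumes purely numeric components, as the traced helper does.
lt_version() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1    # equal versions are not "less than"
}

lt_version 1.15 2 && echo older    # prints "older", matching the trace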
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.728 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.729 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
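Note: the `[: : integer expression expected` complaint above is a real, if benign, bug in the sourced common.sh: line 33 runs `[ '' -eq 1 ]`, handing test an empty string where -eq needs an integer, because the guarded variable is unset in this configuration. Either a default expansion or a non-empty guard silences it; a sketch (the variable name flag is illustrative):

flag=""                                          # unset/empty in this config
[ "${flag:-0}" -eq 1 ] && echo enabled           # default empty to 0
[[ -n $flag && $flag -eq 1 ]] && echo enabled    # or short-circuit on empty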
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:19:20.729 21:16:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:19:25.991 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:19:25.991 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:19:25.991 21:16:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:25.991 Found net devices under 0000:18:00.0: mlx_0_0 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:25.991 Found net devices under 0000:18:00.1: mlx_0_1 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:25.991 21:16:13 
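Note: the "Found net devices under ..." lines above come from resolving each matched PCI function to its kernel netdev through sysfs, where the device's net/ directory holds one entry per interface. A minimal sketch of that lookup (the PCI address is the one from this run):

shopt -s nullglob                                  # empty array if no netdev
pci=0000:18:00.0
pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
pci_net_devs=( "${pci_net_devs[@]##*/}" )          # strip the sysfs path
echo "Found net devices under $pci: ${pci_net_devs[*]}"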
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:25.991 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:25.992 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.992 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:19:25.992 altname enp24s0f0np0 00:19:25.992 altname ens785f0np0 00:19:25.992 inet 192.168.100.8/24 scope global mlx_0_0 00:19:25.992 valid_lft forever preferred_lft forever 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:25.992 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:25.992 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:19:25.992 altname enp24s0f1np1 00:19:25.992 altname ens785f1np1 00:19:25.992 inet 192.168.100.9/24 scope global mlx_0_1 00:19:25.992 valid_lft forever preferred_lft forever 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:25.992 192.168.100.9' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:25.992 192.168.100.9' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:25.992 192.168.100.9' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:25.992 
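
The interface-to-IP resolution traced above amounts to one `ip -o -4 addr show` per RDMA netdev, with awk/cut peeling the address out of the CIDR field, yielding 192.168.100.8 and 192.168.100.9 for NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A standalone sketch of the same extraction (interface names taken from the trace; the helper mirrors the get_ip_address function being traced):

#!/usr/bin/env bash
# Resolve the first IPv4 address of each RDMA-backed netdev found above.
get_ip_address() {
    local interface=$1
    # -o prints one record per line, so field 4 is always the CIDR address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
for nic in mlx_0_0 mlx_0_1; do
    get_ip_address "$nic"   # prints 192.168.100.8 / 192.168.100.9 here
done
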
21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3969465 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3969465 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3969465 ']' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:25.992 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=619adc070085a7f6ca3169ac6ad68e58 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 
00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.tgJ 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 619adc070085a7f6ca3169ac6ad68e58 0 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 619adc070085a7f6ca3169ac6ad68e58 0 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=619adc070085a7f6ca3169ac6ad68e58 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:26.250 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.tgJ 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.tgJ 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.tgJ 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ee3a6e5d2e9a154dfc04a2c36caa278e0292d767544a21448a15257a791eb7e4 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.YP9 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ee3a6e5d2e9a154dfc04a2c36caa278e0292d767544a21448a15257a791eb7e4 3 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ee3a6e5d2e9a154dfc04a2c36caa278e0292d767544a21448a15257a791eb7e4 3 00:19:26.507 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ee3a6e5d2e9a154dfc04a2c36caa278e0292d767544a21448a15257a791eb7e4 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.YP9 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.YP9 00:19:26.508 21:16:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.YP9 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0ec8fdd51ee1c33491bbc75c4e479002b0737ad7bcb820af 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.jSe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0ec8fdd51ee1c33491bbc75c4e479002b0737ad7bcb820af 0 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0ec8fdd51ee1c33491bbc75c4e479002b0737ad7bcb820af 0 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0ec8fdd51ee1c33491bbc75c4e479002b0737ad7bcb820af 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.jSe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.jSe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.jSe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f5f37876f3c1a69547f738363d477ac2587d92540de7c2c 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Rs9 00:19:26.508 
21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f5f37876f3c1a69547f738363d477ac2587d92540de7c2c 2 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f5f37876f3c1a69547f738363d477ac2587d92540de7c2c 2 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f5f37876f3c1a69547f738363d477ac2587d92540de7c2c 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Rs9 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Rs9 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Rs9 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=40d7f9d7a93f4a5d3b702d3962ed2949 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.eOe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 40d7f9d7a93f4a5d3b702d3962ed2949 1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 40d7f9d7a93f4a5d3b702d3962ed2949 1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=40d7f9d7a93f4a5d3b702d3962ed2949 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.eOe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.eOe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.eOe 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:19:26.508 21:16:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e81c59253cba34f5c2de6233f57439ad 00:19:26.508 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.T5F 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e81c59253cba34f5c2de6233f57439ad 1 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e81c59253cba34f5c2de6233f57439ad 1 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e81c59253cba34f5c2de6233f57439ad 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.T5F 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.T5F 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.T5F 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=592f89c46dd43268e3c52349ee21fa5836d64231b6417fe5 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IyQ 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 592f89c46dd43268e3c52349ee21fa5836d64231b6417fe5 2 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
592f89c46dd43268e3c52349ee21fa5836d64231b6417fe5 2 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=592f89c46dd43268e3c52349ee21fa5836d64231b6417fe5 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:19:26.765 21:16:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.765 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IyQ 00:19:26.765 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IyQ 00:19:26.765 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IyQ 00:19:26.765 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b67d19429543ef09acd928f948c70e6f 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wgs 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b67d19429543ef09acd928f948c70e6f 0 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b67d19429543ef09acd928f948c70e6f 0 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b67d19429543ef09acd928f948c70e6f 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wgs 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wgs 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.wgs 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:26.766 
21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a0b2a7ac272d4d48784addc27e20d4599911f228f082e5fb9cdf48257feb8ba 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.i6X 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a0b2a7ac272d4d48784addc27e20d4599911f228f082e5fb9cdf48257feb8ba 3 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a0b2a7ac272d4d48784addc27e20d4599911f228f082e5fb9cdf48257feb8ba 3 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a0b2a7ac272d4d48784addc27e20d4599911f228f082e5fb9cdf48257feb8ba 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.i6X 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.i6X 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.i6X 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3969465 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3969465 ']' 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
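
Each gen_dhchap_key call above draws len/2 random bytes with xxd, then pipes the hex string through an inline `python -` step whose DHHC-1-framed output lands in a chmod-0600 mktemp file. The framing is only partly visible in the trace, so the sketch below is an assumption-laden reconstruction: the base64 payload demonstrably contains the ASCII hex key, the trailing four bytes are assumed to be a little-endian CRC-32 of it, and the two-hex-digit digest ids (00=null, 01=sha256, 02=sha384, 03=sha512) match the DHHC-1:00/:01/:02/:03 prefixes that appear later in the log.

#!/usr/bin/env bash
gen_dhchap_key() {
    local digest=$1 len=$2   # digest id 0-3, key length in hex characters
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    # ASSUMED framing: DHHC-1:<id>:base64(hex string + CRC-32 LE):
    python3 - "$digest" "$key" <<'PY'
import base64, struct, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
payload = key + struct.pack("<I", zlib.crc32(key))   # checksum is an assumption
print(f"DHHC-1:{digest:02x}:{base64.b64encode(payload).decode()}:")
PY
}
file=$(mktemp -t spdk.key-null.XXX)
gen_dhchap_key 0 32 > "$file" && chmod 0600 "$file"
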
00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.766 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tgJ 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.YP9 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.YP9 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.jSe 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Rs9 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Rs9 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.eOe 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.T5F ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.T5F 00:19:27.024 21:16:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IyQ 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.wgs ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.wgs 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.i6X 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:19:27.024 21:16:14 
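
All five key/ckey pairs generated earlier are then registered with the running nvmf_tgt over its RPC socket, one keyring_file_add_key call per file; controller (bidirectional) secrets are optional, which is why keyid 4 carries an empty ckeys slot and its registration is skipped. Condensed from the traced loop (the direct rpc.py invocation is assumed equivalent to the rpc_cmd wrapper the test uses):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
keys=(/tmp/spdk.key-null.tgJ /tmp/spdk.key-null.jSe /tmp/spdk.key-sha256.eOe
      /tmp/spdk.key-sha384.IyQ /tmp/spdk.key-sha512.i6X)
ckeys=(/tmp/spdk.key-sha512.YP9 /tmp/spdk.key-sha384.Rs9 /tmp/spdk.key-sha256.T5F
       /tmp/spdk.key-null.wgs "")
for i in "${!keys[@]}"; do
    "$rpc" keyring_file_add_key "key$i" "${keys[i]}"
    # keyid 4 has no controller key, so the ckey registration is skipped.
    [[ -n ${ckeys[i]} ]] && "$rpc" keyring_file_add_key "ckey$i" "${ckeys[i]}"
done
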
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:19:27.024 21:16:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:19:30.299 Waiting for block devices as requested 00:19:30.299 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:30.299 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:19:30.557 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:19:30.557 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:19:30.557 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:19:30.557 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:19:30.814 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:19:30.814 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:19:30.814 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:19:31.071 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:19:33.037 No valid GPT data, bailing 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 --hostid=00bafac1-9c9c-e711-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:19:33.037 00:19:33.037 Discovery Log Number of Records 2, Generation counter 2 00:19:33.037 =====Discovery Log Entry 0====== 00:19:33.037 trtype: rdma 00:19:33.037 adrfam: ipv4 00:19:33.037 subtype: current discovery subsystem 00:19:33.037 treq: not specified, sq flow control disable supported 00:19:33.037 portid: 1 00:19:33.037 trsvcid: 4420 00:19:33.037 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:33.037 traddr: 192.168.100.8 00:19:33.037 eflags: none 00:19:33.037 rdma_prtype: not specified 00:19:33.037 rdma_qptype: connected 00:19:33.037 rdma_cms: rdma-cm 00:19:33.037 rdma_pkey: 0x0000 00:19:33.037 =====Discovery Log Entry 1====== 00:19:33.037 trtype: rdma 00:19:33.037 adrfam: ipv4 00:19:33.037 subtype: nvme subsystem 00:19:33.037 treq: not specified, sq flow control disable supported 00:19:33.037 portid: 1 00:19:33.037 trsvcid: 4420 00:19:33.037 subnqn: nqn.2024-02.io.spdk:cnode0 00:19:33.037 traddr: 192.168.100.8 00:19:33.037 eflags: none 00:19:33.037 rdma_prtype: not specified 00:19:33.037 rdma_qptype: connected 00:19:33.037 rdma_cms: rdma-cm 00:19:33.037 rdma_pkey: 0x0000 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
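
The mkdir/echo/ln -s run above builds the kernel nvmet target entirely through configfs: a subsystem for nqn.2024-02.io.spdk:cnode0 backed by the freshly reset /dev/nvme0n1, plus an RDMA port on 192.168.100.8:4420 linked to it, which is exactly what the two discovery log entries then confirm. xtrace captures only the values being echoed, not the redirect targets, so the attribute paths in this sketch are the kernel's standard nvmet configfs names, assumed from the value types:

#!/usr/bin/env bash
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed target
echo 1 > "$subsys/attr_allow_any_host"                        # assumed target
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port
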
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:19:33.037 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
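
nvmet_auth_init and nvmet_auth_set_key above finish the target side of the handshake: a host entry for nqn.2024-02.io.spdk:host0 is linked into the subsystem's allowed_hosts, then the digest, DH group, host secret (key1) and controller secret (ckey1) are written into it. The echo redirect targets are again hidden by xtrace; the per-host dhchap attribute names below are assumed, while the DHHC-1 values are copied verbatim from the trace:

#!/usr/bin/env bash
nvmet=/sys/kernel/config/nvmet
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"     # assumed target: digest under test
echo ffdhe2048 > "$host/dhchap_dhgroup"       # assumed target: DH group under test
echo 'DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==:' > "$host/dhchap_key"
echo 'DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==:' > "$host/dhchap_ctrl_key"
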
host/auth.sh@61 -- # get_main_ns_ip 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.038 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.296 nvme0n1 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.296 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.554 nvme0n1 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
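
Every connect_authenticate iteration above is the same three-RPC cycle against the SPDK initiator: pin the digest/dhgroup pair under test, attach with the keyring entries registered earlier (the DH-HMAC-CHAP exchange happens inside the attach), confirm the controller came up, and tear it down before the next combination. Condensed from the first traced cycle, which used key1/ckey1 (rpc.py assumed equivalent to the rpc_cmd wrapper):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Restrict the host to the digest/dhgroup pair under test.
"$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# Attach triggers the DH-HMAC-CHAP handshake with key1/ckey1.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# A successful handshake leaves exactly one controller, nvme0.
[[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0
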
00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.554 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.555 21:16:20 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 nvme0n1 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.813 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.072 nvme0n1 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.072 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.330 nvme0n1 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.330 21:16:21 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:34.330 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # 
local ip 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.331 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.588 nvme0n1 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 
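[editorial note] Each attach in this trace is preceded by the same `get_main_ns_ip` boilerplate from `nvmf/common.sh@769-783`: an associative array maps the transport to the *name* of the environment variable holding the target address, which is then expanded indirectly (here `rdma` → `NVMF_FIRST_TARGET_IP` → `192.168.100.8`). A hedged reconstruction of that helper, with the guard order read off the trace; the exact error handling in `nvmf/common.sh` may differ:

```bash
# Reconstructed from the nvmf/common.sh@769-783 fragments repeated above;
# TEST_TRANSPORT=rdma and NVMF_FIRST_TARGET_IP=192.168.100.8 in this run.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z $TEST_TRANSPORT ]] && return 1                  # @775: transport set?
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1 # @775: known transport?
    ip=${ip_candidates[$TEST_TRANSPORT]}                  # @776: variable *name*
    [[ -z ${!ip} ]] && return 1                           # @778: indirect expansion
    echo "${!ip}"                                         # @783: the address itself
}
```

This indirection is why the trace shows `ip=NVMF_FIRST_TARGET_IP` followed by tests against the literal `192.168.100.8`.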
00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.589 21:16:21 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:34.589 21:16:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.589 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.846 nvme0n1 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:19:34.846 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:34.847 21:16:22 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.847 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.105 nvme0n1 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.105 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
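[editorial note] At this point the trace has finished the five-key `ffdhe2048` sweep and is walking the same key indices under `ffdhe3072` (and later `ffdhe4096`). The driver visible at `host/auth.sh@101-104` is a nested loop; a sketch of its shape, assuming `keys[]`/`ckeys[]` were populated with the `DHHC-1` secrets earlier and `$digest` stays `sha256` for this outer pass:

```bash
# Shape of the driver loop seen at host/auth.sh@101-104 in the trace.
# keys[0..4] / ckeys[0..4] are assumed populated earlier; ckeys[4] is empty.
for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ffdhe3072 ffdhe4096 ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
        nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"
        connect_authenticate sha256 "$dhgroup" "$keyid"
    done
done
```

Inside `connect_authenticate`, the `@58` expansion `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` adds the controller-key flag only when a bidirectional secret exists, which is why the keyid-4 attaches in this log pass `--dhchap-key key4` alone.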
00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:35.362 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.363 nvme0n1 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.363 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.633 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.634 21:16:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.634 nvme0n1 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.634 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.923 21:16:23 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.923 nvme0n1 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:35.923 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.196 
21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.196 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.197 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.454 nvme0n1 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.454 
21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.454 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.455 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.712 nvme0n1 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.712 21:16:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.712 
21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.712 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.969 nvme0n1 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:19:36.969 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:36.970 21:16:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.970 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.227 nvme0n1 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.227 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 
21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.485 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.742 nvme0n1 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.742 21:16:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:37.742 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.743 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.000 nvme0n1 00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
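Each connect_authenticate pass in the trace above reduces to four RPCs issued through rpc_cmd against the host-side SPDK application: restrict the allowed DH-CHAP digest and DH group, attach the controller with the secrets for the current keyid, verify the controller came up under the requested name, then detach. A minimal sketch of the sha256/ffdhe6144 keyid-0 pass just logged, assuming rpc_cmd wraps scripts/rpc.py on the default RPC socket and that key0/ckey0 were registered with the host keyring earlier in the run (both assumptions; only the RPC names and flags below appear verbatim in the trace):

    # One connect_authenticate iteration, host side (sketch; rpc.py path
    # and pre-loaded key names key0/ckey0 are assumptions):
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 \
        --dhchap-dhgroups ffdhe6144
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The jq check mirrors the [[ nvme0 == \n\v\m\e\0 ]] comparison at host/auth.sh@64: the attach only counts as a successful DH-CHAP handshake if the controller actually shows up under the requested name.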
00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.000 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.257 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.258 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.258 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.258 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.515 nvme0n1 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:38.515 21:16:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:38.515 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.516 21:16:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.081 nvme0n1 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.081 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.339 nvme0n1 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.339 
21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.339 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 
-- # echo 192.168.100.8 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.596 21:16:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 nvme0n1 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.853 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.418 nvme0n1 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.418 21:16:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.418 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.419 21:16:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.419 21:16:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.983 nvme0n1 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- 
# echo 'hmac(sha256)' 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.983 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.548 nvme0n1 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:41.548 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.549 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:41.806 21:16:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.806 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.369 nvme0n1 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:19:42.369 21:16:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:19:42.369 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.370 21:16:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 nvme0n1 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:42.934 21:16:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.934 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 nvme0n1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 
21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.192 
21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.192 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.449 nvme0n1 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.450 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.708 nvme0n1 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
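
The get_main_ns_ip traces that follow (nvmf/common.sh@769-783) all resolve to 192.168.100.8 by mapping the test transport to the *name* of an environment variable and then dereferencing it. A minimal bash sketch of that logic, reconstructed from the xtrace lines above; the TEST_TRANSPORT variable name and the return-code handling are assumptions, not verbatim source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        # Map transport -> name of the variable holding the address to dial
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # TEST_TRANSPORT (assumed name) expands to "rdma" in this run
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion: NVMF_FIRST_TARGET_IP -> 192.168.100.8 here
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }
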
00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.708 21:16:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:43.708 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:43.708 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:43.708 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.708 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 nvme0n1 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.966 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.224 nvme0n1 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.224 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.225 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.482 nvme0n1 00:19:44.482 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.482 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.483 21:16:31 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
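
Every block in this stretch of the log repeats the same cycle: program a key into the nvmet target, restrict the host to one digest/dhgroup pair, attach with DH-HMAC-CHAP, verify the controller came up, detach. A condensed bash sketch of that cycle as it appears in the xtrace (host/auth.sh@100-104 and @55-65); the positional-parameter handling and the success check are simplified for illustration, not verbatim source:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # Pass a controller key only when a ckey exists for this keyid (auth.sh@58)
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # -t rdma is this run's transport (SPDK_TEST_NVMF_NICS=mlx5)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication succeeded if the controller is visible, then clean up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Load key $keyid on the target side for this digest/dhgroup
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
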
00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.483 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 nvme0n1 00:19:44.741 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.741 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.741 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.741 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.741 21:16:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.741 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 nvme0n1 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.999 21:16:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.999 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.256 nvme0n1 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.256 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:45.257 21:16:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.257 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.514 nvme0n1 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.514 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:45.515 21:16:32 
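nvmet_auth_set_key (host/auth.sh@42-51, traced repeatedly above) provisions the kernel nvmet target for each round: it selects the HMAC digest, the FFDHE group, and the DHHC-1 secret for the given key index, plus the optional controller key for bidirectional auth. The echo destinations are not visible in the xtrace; writing to the nvmet configfs host entry, and the attribute names and host path used here, are assumptions in this sketch:

    # keys[]/ckeys[] hold DHHC-1:<hash-id>:<base64-secret>: strings as seen in the trace
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        # assumed sink: configfs host entry created earlier in the run
        local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)" > "$host_dir/dhchap_hash"    # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host_dir/dhchap_dhgroup" # e.g. ffdhe4096
        echo "$key"          > "$host_dir/dhchap_key"
        # controller key only when one is defined for this keyid (keyid=4 has none)
        [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    }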
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.515 21:16:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.772 nvme0n1 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.772 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.029 21:16:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:46.029 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:46.029 21:16:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:46.030 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.030 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.030 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.287 nvme0n1 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.287 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.545 nvme0n1 00:19:46.545 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.546 21:16:33 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.546 21:16:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.804 nvme0n1 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:46.804 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.805 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.062 nvme0n1 00:19:47.062 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.062 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.062 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.062 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.062 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.062 21:16:34 
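connect_authenticate (host/auth.sh@55-61) is the host half of each round: it restricts SPDK's bdev/nvme layer to the single digest and DH group under test, then attaches over RDMA with the matching --dhchap-key (and, when a controller key exists, --dhchap-ctrlr-key). A sketch assembled from the traced RPCs; the hostnqn/subnqn literals are copied from the trace rather than from the script's own variables:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3 ckey
        # expands to --dhchap-ctrlr-key ckeyN only when ckeys[keyid] is set,
        # exactly as the @58 trace line shows
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
    }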
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.320 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.578 nvme0n1 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
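The @101 and @102 markers above come from the driving loops: for every DH group the suite re-keys the target and re-attaches the host once per key index (@103/@104), so each digest/dhgroup/keyid combination is authenticated in isolation. A reconstruction of the loop shape; sha384 is fixed at this point of the run, and an enclosing digest loop is implied but not visible in this stretch of the log:

    # in this section dhgroups advances through ffdhe3072, ffdhe4096,
    # ffdhe6144 and ffdhe8192; keys[0..4]/ckeys[0..4] were generated earlier
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done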
00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.578 21:16:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.578 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 nvme0n1 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.143 21:16:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.143 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.401 nvme0n1 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.401 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
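Each attach is verified and torn down before the next round (host/auth.sh@64-65): the controller list must contain exactly the freshly created nvme0 (the \n\v\m\e\0 form in the trace is just xtrace quoting each glob character of the unquoted pattern), and the interleaved nvme0n1 lines are the namespace surfacing while the controller is live. The check, as traced:

    # assert the authenticated attach produced the expected controller, then detach
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0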
00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.658 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.659 21:16:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.916 nvme0n1 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.916 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 nvme0n1 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.479 21:16:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.041 nvme0n1 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.041 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.042 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.603 nvme0n1 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:50.603 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.604 21:16:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.166 nvme0n1 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.166 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:51.167 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:51.167 
21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:51.423 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:51.423 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.423 21:16:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.990 nvme0n1 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.990 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:51.991 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.992 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.559 nvme0n1 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.559 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.560 21:16:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.560 21:16:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.817 nvme0n1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:52.817 21:16:40 
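The DHHC-1 strings echoed throughout this trace are NVMe-oF in-band authentication secrets in the standard "DHHC-1:<t>:<base64>:" representation, where <t> (00-03) indicates how the configured secret was transformed (none, or HMAC with SHA-256/384/512) and the base64 payload carries the secret followed by a CRC-32 of the secret. A quick sanity check, sketched against one of the keys above (the expected byte count is an assumption based on that format):

# Decode the payload of a DHHC-1 key and report its length; for a 32-byte
# secret the output should be 36 (secret plus 4-byte CRC-32).
key='DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3:'
cut -d: -f3 <<< "$key" | base64 -d | wc -c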
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.817 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.075 nvme0n1 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.075 21:16:40 
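Every connect in this trace first runs get_main_ns_ip (nvmf/common.sh@769-783) to pick the target address; its xtrace expansion is what produces the repeated ip_candidates lines above. A minimal reconstruction of the visible logic, with the indirect ${!ip} dereference and the surrounding variables assumed from context:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )

    # TEST_TRANSPORT is rdma in this run, so NVMF_FIRST_TARGET_IP is chosen.
    [[ -z $TEST_TRANSPORT ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z $ip ]] && return 1

    # Dereference the variable *name*; here NVMF_FIRST_TARGET_IP=192.168.100.8.
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}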
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.075 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 nvme0n1 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.333 21:16:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.333 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.590 nvme0n1 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:53.591 21:16:40 
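nvmet_auth_set_key (host/auth.sh@42-51), expanded again just above for keyid 4, installs the chosen secret on the kernel nvmet target before the host-side connect. The echo targets are not visible in xtrace; the configfs paths below are an assumption suggested by the 'hmac(sha512)' strings, which match the kernel's dhchap_hash syntax:

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # Assumed location of the allowed-host entry created earlier in the test.
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    # keyid 4 has no controller key, so bidirectional auth is skipped for it.
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}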
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.591 21:16:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.849 nvme0n1 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:53.849 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.850 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.107 nvme0n1 00:19:54.107 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.107 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.107 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.108 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.365 nvme0n1 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.366 21:16:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.366 21:16:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.366 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.624 nvme0n1 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 
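The DHHC-1 strings being staged here are DH-HMAC-CHAP secrets in the standard NVMe representation (the format nvme gen-dhchap-key emits): DHHC-1:<hash>:<base64>:, where <hash> marks the secret class (00 plain, 01 SHA-256/32-byte, 02 SHA-384/48-byte, 03 SHA-512/64-byte) and the base64 payload is the secret with a 4-byte CRC-32 appended. A quick standalone check of the 02-class key above (illustrative parsing, not taken from the test scripts):

    key='DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==:'
    b64=${key#DHHC-1:??:}             # strip the "DHHC-1:<hash>:" prefix
    b64=${b64%:}                      # and the trailing ":"
    echo "$b64" | base64 -d | wc -c   # 52 bytes: a 48-byte secret plus the CRC-32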
00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:54.624 21:16:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.624 21:16:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.883 nvme0n1 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.883 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 nvme0n1 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.141 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.142 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 nvme0n1 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.399 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.656 21:16:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.913 nvme0n1 00:19:55.913 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.913 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.914 21:16:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:55.914 21:16:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.914 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.172 nvme0n1 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.172 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.430 nvme0n1 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.430 
21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.430 21:16:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 nvme0n1 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.687 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.688 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:56.945 21:16:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.945 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 nvme0n1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.203 21:16:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.203 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.768 nvme0n1 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.769 21:16:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
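The records above make up one full round of the auth matrix: for each (digest, dhgroup, keyid) triple the script restricts the host's DH-HMAC-CHAP options, attaches a controller over RDMA with the keyring entries for that keyid, then later verifies and detaches it. A minimal sketch of the host-side half, reconstructed from the traced rpc_cmd calls (rpc_cmd wraps SPDK's scripts/rpc.py; the path and the loose variables here are assumptions):

    # One connect round, as traced for sha512/ffdhe6144/keyid=1.
    digest=sha512 dhgroup=ffdhe6144 keyid=1
    # Only the digest/dhgroup pair under test is allowed to negotiate:
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # key$keyid / ckey$keyid are keyring names registered beforehand:
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"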
00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
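The target-side half is the nvmet_auth_set_key call traced at host/auth.sh@42-51: the echo records for 'hmac(sha512)', the dhgroup, and the two DHHC-1 secrets are consistent with writes into the kernel nvmet configfs entry for the host NQN. A hedged sketch; the attribute names below are the standard Linux nvmet ones and are an assumption about this script:

    # Program the in-kernel target's expectations for one host (sketch).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)'  > "$host/dhchap_hash"      # digest under test
    echo 'ffdhe6144'     > "$host/dhchap_dhgroup"   # DH group under test
    echo 'DHHC-1:00:...' > "$host/dhchap_key"       # host's secret
    echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"  # controller secret (bidirectional only)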
00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.769 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.026 nvme0n1 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.026 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.591 nvme0n1 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
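Every secret in the trace has the shape DHHC-1:<hh>:<base64>:, the DH-HMAC-CHAP secret representation: <hh> names the hash the secret was generated for (00 = not hash-specific, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 of it, so an 03 secret decodes to 68 bytes (64 + 4). A quick way to pull the fields apart, using a key taken from the trace above:

    # Illustrative only: split a DH-HMAC-CHAP secret into its fields.
    key='DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=:'
    IFS=: read -r tag hash b64 _ <<< "$key"
    printf '%s hash-id=%s payload=%d bytes\n' "$tag" "$hash" \
        "$(printf %s "$b64" | base64 -d | wc -c)"   # -> DHHC-1 hash-id=03 payload=68 bytes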
+x 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:58.591 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.592 21:16:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.850 nvme0n1 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.850 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 
00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjE5YWRjMDcwMDg1YTdmNmNhMzE2OWFjNmFkNjhlNThULDcL: 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZWUzYTZlNWQyZTlhMTU0ZGZjMDRhMmMzNmNhYTI3OGUwMjkyZDc2NzU0NGEyMTQ0OGExNTI1N2E3OTFlYjdlNJARSq8=: 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.108 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.674 nvme0n1 
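Note the asymmetry between keyids: ckeys[4] is empty (the bare ckey= record in the keyid=4 round above), so those rounds attach with --dhchap-key only and exercise unidirectional authentication, while keyids 0-3 also pass --dhchap-ctrlr-key and force the controller to authenticate back. The ckey=() line at host/auth.sh@58 does this with bash's :+ expansion; a minimal reproduction:

    # ${var:+words} expands to nothing when var is empty/unset, so the
    # controller-key argument pair only materializes when a ckey exists.
    ckeys=([0]='DHHC-1:03:...' [4]='')
    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]:-<no controller key args>}"   # -> <no controller key args>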
00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:19:59.674 21:16:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.674 21:16:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 nvme0n1 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
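The ip_candidates block that repeats before every attach is get_main_ns_ip choosing the address to dial: it maps each transport to the environment variable holding the right IP (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and dereferences it, which on this rig resolves to 192.168.100.8. A reconstruction from the traced records (the transport variable's name is an assumption; the trace only shows its expanded value, rdma):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # ${!ip} is indirect: $NVMF_FIRST_TARGET_IP
        echo "${!ip}"
    }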
00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.241 21:16:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.807 nvme0n1 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyZjg5YzQ2ZGQ0MzI2OGUzYzUyMzQ5ZWUyMWZhNTgzNmQ2NDIzMWI2NDE3ZmU1dFecFQ==: 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjY3ZDE5NDI5NTQzZWYwOWFjZDkyOGY5NDhjNzBlNmZrvwdf: 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:20:00.807 21:16:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.807 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 nvme0n1 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
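Each positive round closes the same way: bdev_nvme_get_controllers piped through jq must yield nvme0, proving the DH-HMAC-CHAP handshake completed, and the controller is detached before the next keyid. (The odd-looking \n\v\m\e\0 in the [[ ]] record is just xtrace escaping the unquoted right-hand pattern of the comparison.) The check, rewritten outside the rpc_cmd wrapper (rpc.py path assumed as before):

    # Verify the authenticated attach, then tear down for the next round.
    name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0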
00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OWEwYjJhN2FjMjcyZDRkNDg3ODRhZGRjMjdlMjBkNDU5OTkxMWYyMjhmMDgyZTVmYjljZGY0ODI1N2ZlYjhiYU72GaQ=: 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:01.372 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:01.373 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:01.373 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.373 21:16:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.938 nvme0n1 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:20:01.938 
21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.938 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.194 request: 00:20:02.194 { 00:20:02.194 "name": "nvme0", 00:20:02.194 "trtype": "rdma", 00:20:02.194 "traddr": "192.168.100.8", 00:20:02.194 "adrfam": "ipv4", 00:20:02.194 "trsvcid": "4420", 00:20:02.194 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:20:02.194 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:02.194 "prchk_reftag": false, 00:20:02.194 "prchk_guard": false, 00:20:02.194 "hdgst": false, 00:20:02.194 "ddgst": false, 00:20:02.194 "allow_unrecognized_csi": false, 00:20:02.194 "method": "bdev_nvme_attach_controller", 00:20:02.194 "req_id": 1 00:20:02.194 } 00:20:02.194 Got JSON-RPC error response 00:20:02.194 response: 00:20:02.194 { 00:20:02.194 "code": -5, 00:20:02.194 "message": "Input/output error" 00:20:02.194 } 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:02.194 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.195 request: 00:20:02.195 { 00:20:02.195 "name": "nvme0", 00:20:02.195 "trtype": "rdma", 00:20:02.195 "traddr": "192.168.100.8", 00:20:02.195 "adrfam": "ipv4", 00:20:02.195 "trsvcid": "4420", 00:20:02.195 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:02.195 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:02.195 "prchk_reftag": false, 00:20:02.195 "prchk_guard": false, 00:20:02.195 "hdgst": false, 00:20:02.195 "ddgst": false, 00:20:02.195 "dhchap_key": "key2", 00:20:02.195 "allow_unrecognized_csi": false, 00:20:02.195 "method": "bdev_nvme_attach_controller", 00:20:02.195 "req_id": 1 00:20:02.195 } 00:20:02.195 Got JSON-RPC error response 00:20:02.195 response: 00:20:02.195 { 00:20:02.195 "code": -5, 00:20:02.195 "message": "Input/output error" 00:20:02.195 } 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.195 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.452 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.453 request: 00:20:02.453 { 00:20:02.453 "name": "nvme0", 00:20:02.453 "trtype": "rdma", 00:20:02.453 "traddr": "192.168.100.8", 00:20:02.453 "adrfam": "ipv4", 00:20:02.453 "trsvcid": "4420", 00:20:02.453 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:02.453 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:02.453 "prchk_reftag": false, 00:20:02.453 "prchk_guard": false, 00:20:02.453 "hdgst": false, 00:20:02.453 "ddgst": false, 00:20:02.453 "dhchap_key": "key1", 00:20:02.453 "dhchap_ctrlr_key": "ckey2", 00:20:02.453 "allow_unrecognized_csi": false, 00:20:02.453 "method": "bdev_nvme_attach_controller", 00:20:02.453 "req_id": 1 00:20:02.453 } 00:20:02.453 Got JSON-RPC error response 00:20:02.453 response: 00:20:02.453 { 00:20:02.453 "code": -5, 00:20:02.453 "message": "Input/output error" 00:20:02.453 } 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:20:02.453 21:16:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.453 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.710 nvme0n1 00:20:02.710 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.711 
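The successful path traced above — attach with key1/ckey1, then rotate both sides to key2/ckey2 via bdev_nvme_set_keys — can be replayed by hand against the same target. A minimal sketch, assuming the DHCHAP secrets key1/ckey1/key2/ckey2 are already registered with the target application exactly as the harness did earlier (that setup is not shown in this excerpt); all flags below are the ones visible in the trace:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Attach using the initial key pair; the short controller-loss timeout and
# reconnect delay make a later authentication failure surface quickly.
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Rotate both directions of the handshake; this succeeds only while the
# target side has been switched to the matching key pair.
$RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2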
21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.711 21:16:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.711 request: 00:20:02.711 { 00:20:02.711 "name": "nvme0", 00:20:02.711 "dhchap_key": "key1", 00:20:02.711 "dhchap_ctrlr_key": "ckey2", 00:20:02.711 "method": "bdev_nvme_set_keys", 00:20:02.711 "req_id": 1 00:20:02.711 } 00:20:02.711 Got JSON-RPC error response 00:20:02.711 response: 00:20:02.711 { 00:20:02.711 "code": -13, 00:20:02.711 "message": "Permission denied" 00:20:02.711 } 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:02.711 21:16:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:03.641 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:03.641 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.641 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.641 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:03.899 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.899 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:03.899 21:16:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:04.831 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:20:04.832 21:16:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:05.765 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGVjOGZkZDUxZWUxYzMzNDkxYmJjNzVjNGU0NzkwMDJiMDczN2FkN2JjYjgyMGFml0al9g==: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:M2Y1ZjM3ODc2ZjNjMWE2OTU0N2Y3MzgzNjNkNDc3YWMyNTg3ZDkyNTQwZGU3YzJjfQwiDg==: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 nvme0n1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDBkN2Y5ZDdhOTNmNGE1ZDNiNzAyZDM5NjJlZDI5NDnlbEy3: 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTgxYzU5MjUzY2JhMzRmNWMyZGU2MjMzZjU3NDM5YWSthyAk: 
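Before the negative set_keys check that follows, nvmet_auth_set_key re-provisions the kernel target with keyid 2. A minimal sketch of what those four echoes amount to on the configfs side — the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from the standard Linux nvmet host directory layout and do not appear in this log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest chosen by the test
echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group for the exchange
echo "$key"         > "$host/dhchap_key"       # DHHC-1 host secret (keyid 2)
echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller (bidirectional) secret

With the target now expecting key2/ckey2, the mismatched bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 below is expected to be rejected with "Permission denied" (-13).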
00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 request: 00:20:06.024 { 00:20:06.024 "name": "nvme0", 00:20:06.024 "dhchap_key": "key2", 00:20:06.024 "dhchap_ctrlr_key": "ckey1", 00:20:06.024 "method": "bdev_nvme_set_keys", 00:20:06.024 "req_id": 1 00:20:06.024 } 00:20:06.024 Got JSON-RPC error response 00:20:06.024 response: 00:20:06.024 { 00:20:06.024 "code": -13, 00:20:06.024 "message": "Permission denied" 00:20:06.024 } 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:06.024 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.282 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:06.282 21:16:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- 
# (( 1 != 0 )) 00:20:07.214 21:16:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:20:08.146 21:16:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:09.518 rmmod nvme_rdma 00:20:09.518 rmmod nvme_fabrics 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3969465 ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3969465 ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3969465' 00:20:09.518 killing process with pid 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3969465 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:20:09.518 21:16:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:12.797 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:20:12.797 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:20:16.076 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:20:17.519 21:17:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.tgJ /tmp/spdk.key-null.jSe /tmp/spdk.key-sha256.eOe /tmp/spdk.key-sha384.IyQ /tmp/spdk.key-sha512.i6X /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:20:17.519 21:17:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:20:20.042 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:20:20.042 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:21.417 00:20:21.417 real 1m0.673s 00:20:21.417 user 0m48.410s 00:20:21.417 sys 0m14.765s 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 ************************************ 00:20:21.417 END TEST nvmf_auth_host 00:20:21.417 ************************************ 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.417 ************************************ 00:20:21.417 START TEST nvmf_bdevperf 00:20:21.417 ************************************ 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:20:21.417 * Looking for test storage... 
00:20:21.417 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:21.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.417 --rc genhtml_branch_coverage=1 00:20:21.417 --rc genhtml_function_coverage=1 00:20:21.417 --rc genhtml_legend=1 00:20:21.417 --rc geninfo_all_blocks=1 00:20:21.417 --rc geninfo_unexecuted_blocks=1 00:20:21.417 00:20:21.417 ' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:21.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.417 --rc genhtml_branch_coverage=1 00:20:21.417 --rc genhtml_function_coverage=1 00:20:21.417 --rc genhtml_legend=1 00:20:21.417 --rc geninfo_all_blocks=1 00:20:21.417 --rc geninfo_unexecuted_blocks=1 00:20:21.417 00:20:21.417 ' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:21.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.417 --rc genhtml_branch_coverage=1 00:20:21.417 --rc genhtml_function_coverage=1 00:20:21.417 --rc genhtml_legend=1 00:20:21.417 --rc geninfo_all_blocks=1 00:20:21.417 --rc geninfo_unexecuted_blocks=1 00:20:21.417 00:20:21.417 ' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:21.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.417 --rc genhtml_branch_coverage=1 00:20:21.417 --rc genhtml_function_coverage=1 00:20:21.417 --rc genhtml_legend=1 00:20:21.417 --rc geninfo_all_blocks=1 00:20:21.417 --rc geninfo_unexecuted_blocks=1 00:20:21.417 00:20:21.417 ' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.417 21:17:08 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.417 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.418 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:20:21.418 21:17:08 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.418 21:17:08 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.686 21:17:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:26.686 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:26.686 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:26.687 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:26.687 Found net devices under 0000:18:00.0: mlx_0_0 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:26.687 Found net devices under 0000:18:00.1: mlx_0_1 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:26.687 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:26.947 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:26.948 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:26.948 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:26.948 altname enp24s0f0np0 00:20:26.948 altname ens785f0np0 00:20:26.948 inet 192.168.100.8/24 scope global mlx_0_0 00:20:26.948 valid_lft forever preferred_lft forever 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:26.948 21:17:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:26.948 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:26.948 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:26.948 altname enp24s0f1np1 00:20:26.948 altname ens785f1np1 00:20:26.948 inet 192.168.100.9/24 scope global mlx_0_1 00:20:26.948 valid_lft forever preferred_lft forever 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:20:26.948 21:17:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:26.948 192.168.100.9' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:26.948 192.168.100.9' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:26.948 192.168.100.9' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3985076 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3985076 00:20:26.948 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3985076 ']' 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.949 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:26.949 [2024-11-26 21:17:14.337638] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:20:26.949 [2024-11-26 21:17:14.337678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.208 [2024-11-26 21:17:14.395882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:27.208 [2024-11-26 21:17:14.434896] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.208 [2024-11-26 21:17:14.434930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.208 [2024-11-26 21:17:14.434936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.208 [2024-11-26 21:17:14.434941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.208 [2024-11-26 21:17:14.434946] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
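[editor's note] The address derivation traced above (nvmf/common.sh@117 and @485-@486) reduces to a short pipeline. A minimal standalone sketch, assuming the same mlx_0_0/mlx_0_1 interface names seen in this run:

    # Derive the per-port IPv4 addresses the way nvmf/common.sh does above:
    # take the first inet entry on the interface and strip the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2)  # 192.168.100.9 in this run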
00:20:27.208 [2024-11-26 21:17:14.436195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.208 [2024-11-26 21:17:14.436279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:27.208 [2024-11-26 21:17:14.436281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.208 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.208 [2024-11-26 21:17:14.584731] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1540cf0/0x15451e0) succeed. 00:20:27.208 [2024-11-26 21:17:14.592836] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15422e0/0x1586880) succeed. 00:20:27.467 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.467 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.468 Malloc0 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:20:27.468 [2024-11-26 21:17:14.731720] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:27.468 { 00:20:27.468 "params": { 00:20:27.468 "name": "Nvme$subsystem", 00:20:27.468 "trtype": "$TEST_TRANSPORT", 00:20:27.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:27.468 "adrfam": "ipv4", 00:20:27.468 "trsvcid": "$NVMF_PORT", 00:20:27.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:27.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:27.468 "hdgst": ${hdgst:-false}, 00:20:27.468 "ddgst": ${ddgst:-false} 00:20:27.468 }, 00:20:27.468 "method": "bdev_nvme_attach_controller" 00:20:27.468 } 00:20:27.468 EOF 00:20:27.468 )") 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:20:27.468 21:17:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:27.468 "params": { 00:20:27.468 "name": "Nvme1", 00:20:27.468 "trtype": "rdma", 00:20:27.468 "traddr": "192.168.100.8", 00:20:27.468 "adrfam": "ipv4", 00:20:27.468 "trsvcid": "4420", 00:20:27.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:27.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:27.468 "hdgst": false, 00:20:27.468 "ddgst": false 00:20:27.468 }, 00:20:27.468 "method": "bdev_nvme_attach_controller" 00:20:27.468 }' 00:20:27.468 [2024-11-26 21:17:14.780512] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:20:27.468 [2024-11-26 21:17:14.780550] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985101 ] 00:20:27.468 [2024-11-26 21:17:14.838632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.468 [2024-11-26 21:17:14.876324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.726 Running I/O for 1 seconds... 
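[editor's note] The host/bdevperf.sh@17-@21 rpc_cmd calls above are the entire target-side setup. Issued directly through scripts/rpc.py against the same /var/tmp/spdk.sock socket, the equivalent sequence would look roughly like this (a sketch; rpc.py path shortened):

    # Recreate the target configuration driven by rpc_cmd above.
    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420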
00:20:28.660 19456.00 IOPS, 76.00 MiB/s
00:20:28.660 Latency(us)
[2024-11-26T20:17:16.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:28.660 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:28.660 Verification LBA range: start 0x0 length 0x4000
00:20:28.660 Nvme1n1 : 1.00 19495.40 76.15 0.00 0.00 6530.87 138.81 11456.66
00:20:28.660 [2024-11-26T20:17:16.096Z] ===================================================================================================================
00:20:28.660 [2024-11-26T20:17:16.096Z] Total : 19495.40 76.15 0.00 0.00 6530.87 138.81 11456.66
00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3985373 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:28.919 { 00:20:28.919 "params": { 00:20:28.919 "name": "Nvme$subsystem", 00:20:28.919 "trtype": "$TEST_TRANSPORT", 00:20:28.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.919 "adrfam": "ipv4", 00:20:28.919 "trsvcid": "$NVMF_PORT", 00:20:28.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.919 "hdgst": ${hdgst:-false}, 00:20:28.919 "ddgst": ${ddgst:-false} 00:20:28.919 }, 00:20:28.919 "method": "bdev_nvme_attach_controller" 00:20:28.919 } 00:20:28.919 EOF 00:20:28.919 )") 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:20:28.919 21:17:16 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:28.919 "params": { 00:20:28.919 "name": "Nvme1", 00:20:28.919 "trtype": "rdma", 00:20:28.919 "traddr": "192.168.100.8", 00:20:28.919 "adrfam": "ipv4", 00:20:28.919 "trsvcid": "4420", 00:20:28.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.919 "hdgst": false, 00:20:28.919 "ddgst": false 00:20:28.919 }, 00:20:28.919 "method": "bdev_nvme_attach_controller" 00:20:28.919 }' 00:20:28.919 [2024-11-26 21:17:16.278847] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
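[editor's note] As a sanity check on the results table above, the MiB/s column is just IOPS times the 4096-byte I/O size: with -o 4096 it is IOPS/256, so 19495.40 IOPS works out to 76.15 MiB/s. In shell:

    # Verify bdevperf's MiB/s column from its IOPS column (-o 4096 => /256).
    awk 'BEGIN { printf "%.2f\n", 19495.40 * 4096 / (1024 * 1024) }'   # prints 76.15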
00:20:28.919 [2024-11-26 21:17:16.278896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3985373 ] 00:20:28.919 [2024-11-26 21:17:16.337968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.178 [2024-11-26 21:17:16.373141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.178 Running I/O for 15 seconds... 00:20:31.121 19428.00 IOPS, 75.89 MiB/s [2024-11-26T20:17:19.491Z] 19520.00 IOPS, 76.25 MiB/s [2024-11-26T20:17:19.492Z] 21:17:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3985076 00:20:32.056 21:17:19 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:20:32.884 17493.33 IOPS, 68.33 MiB/s [2024-11-26T20:17:20.320Z] [2024-11-26 21:17:20.265928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:27464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:32.884 [2024-11-26 21:17:20.265968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0
[... long run of near-identical nvme_qpair print_command/print_completion NOTICE pairs elided: with nvmf_tgt (pid 3985076) killed mid-run, every outstanding qid:1 WRITE (lba 27464-27640) and READ (lba 26624 upward, SGL keyed buffers 0x200004300000-0x2000043ac000, key:0x181200) completes as ABORTED - SQ DELETION (00/08) while bdevperf keeps running ...]
key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b0000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:27352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.887 [2024-11-26 21:17:20.267622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181200 00:20:32.887 [2024-11-26 21:17:20.267628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:27384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.275854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181200 00:20:32.888 [2024-11-26 21:17:20.275860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dc1c3000 sqhd:7250 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.277744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:20:32.888 [2024-11-26 21:17:20.277760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:20:32.888 [2024-11-26 21:17:20.277769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27456 len:8 PRP1 0x0 PRP2 0x0 00:20:32.888 [2024-11-26 21:17:20.277778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.277849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.888 [2024-11-26 21:17:20.277863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:e87db0 sqhd:03d0 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.277873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.888 [2024-11-26 21:17:20.277881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:e87db0 sqhd:03d0 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.277891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.888 [2024-11-26 21:17:20.277899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:e87db0 sqhd:03d0 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.277908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:32.888 [2024-11-26 21:17:20.277916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:e87db0 sqhd:03d0 p:0 m:0 dnr:0 00:20:32.888 [2024-11-26 21:17:20.294149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:32.888 [2024-11-26 21:17:20.294194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:20:32.888 [2024-11-26 21:17:20.294219] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Unable to perform failover, already in progress. 00:20:32.888 [2024-11-26 21:17:20.297018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:20:32.888 [2024-11-26 21:17:20.299895] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:32.888 [2024-11-26 21:17:20.299912] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:32.888 [2024-11-26 21:17:20.299918] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040 00:20:34.083 13120.00 IOPS, 51.25 MiB/s [2024-11-26T20:17:21.519Z] [2024-11-26 21:17:21.303865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:20:34.083 [2024-11-26 21:17:21.303918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
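The pattern above repeats because host/bdevperf.sh deliberately kills the target mid-I/O (the Killed "${NVMF_APP[@]}" line below): every queued READ completes as ABORTED - SQ DELETION, the host marks the controller failed on CQ transport error -6, and then retries the RDMA connect roughly once per second, collecting RDMA_CM_EVENT_REJECTED until tgt_init brings the listener back. A minimal sketch of that failover scenario, assuming a built SPDK tree and the target pid in $nvmfpid (names taken from the trace, not a verbatim excerpt of bdevperf.sh):

    kill -9 "$nvmfpid"             # drop the target mid-I/O; the host logs CQ transport error -6
    ./build/bin/nvmf_tgt -m 0xE &  # restart the target, then re-create transport/subsystem/listener
                                   # via rpc.py (see the rpc_cmd calls traced below); once the
                                   # listener is back, the host's next reconnect attempt succeeds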
00:20:34.083 [2024-11-26 21:17:21.304511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:34.083 [2024-11-26 21:17:21.304570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:34.083 [2024-11-26 21:17:21.304592] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:20:34.083 [2024-11-26 21:17:21.304626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:34.083 [2024-11-26 21:17:21.310128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:34.083 [2024-11-26 21:17:21.313019] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:34.083 [2024-11-26 21:17:21.313037] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:34.083 [2024-11-26 21:17:21.313043] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:20:34.909 10496.00 IOPS, 41.00 MiB/s
[2024-11-26T20:17:22.345Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3985076 Killed "${NVMF_APP[@]}" "$@"
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3986424
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3986424
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3986424 ']'
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:34.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:34.909 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:34.910 [2024-11-26 21:17:22.298775] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
00:20:34.910 [2024-11-26 21:17:22.298815] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:34.910 [2024-11-26 21:17:22.316924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:20:34.910 [2024-11-26 21:17:22.316947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:34.910 [2024-11-26 21:17:22.317109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:34.910 [2024-11-26 21:17:22.317118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:34.910 [2024-11-26 21:17:22.317125] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:20:34.910 [2024-11-26 21:17:22.317133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:34.910 [2024-11-26 21:17:22.324580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:34.910 [2024-11-26 21:17:22.326958] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:20:34.910 [2024-11-26 21:17:22.326977] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:20:34.910 [2024-11-26 21:17:22.326983] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed040
00:20:35.169 [2024-11-26 21:17:22.359088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:35.169 [2024-11-26 21:17:22.397917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:35.169 [2024-11-26 21:17:22.397950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:35.169 [2024-11-26 21:17:22.397956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:35.169 [2024-11-26 21:17:22.397962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:35.169 [2024-11-26 21:17:22.397967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
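The app_setup_trace notices above spell out how to inspect this run's tracepoints (group mask 0xFFFF): snapshot the live app or copy the shm file, e.g. (paths relative to the SPDK checkout used by this job):

    ./build/bin/spdk_trace -s nvmf -i 0   # snapshot events from the running nvmf app (shm id 0)
    cp /dev/shm/nvmf_trace.0 .            # or keep the raw trace file for offline analysis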
00:20:35.169 [2024-11-26 21:17:22.399107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:35.169 [2024-11-26 21:17:22.399178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:20:35.169 [2024-11-26 21:17:22.399179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.169 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.169 [2024-11-26 21:17:22.548847] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2207cf0/0x220c1e0) succeed.
00:20:35.169 [2024-11-26 21:17:22.557275] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22092e0/0x224d880) succeed.
00:20:35.428 8746.67 IOPS, 34.17 MiB/s
[2024-11-26T20:17:22.864Z] 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.428 Malloc0
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:35.428 [2024-11-26 21:17:22.692828] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.428 21:17:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3985373
00:20:35.994 [2024-11-26 21:17:23.331152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:20:35.994 [2024-11-26 21:17:23.331178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:20:35.994 [2024-11-26 21:17:23.331344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:20:35.994 [2024-11-26 21:17:23.331352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:20:35.994 [2024-11-26 21:17:23.331359] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state
00:20:35.994 [2024-11-26 21:17:23.331371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:20:35.994 [2024-11-26 21:17:23.334348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:20:35.994 [2024-11-26 21:17:23.377061] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:20:37.184 8008.00 IOPS, 31.28 MiB/s
[2024-11-26T20:17:25.992Z] 9452.00 IOPS, 36.92 MiB/s
[2024-11-26T20:17:26.925Z] 10577.11 IOPS, 41.32 MiB/s
[2024-11-26T20:17:27.857Z] 11472.20 IOPS, 44.81 MiB/s
[2024-11-26T20:17:28.788Z] 12205.82 IOPS, 47.68 MiB/s
[2024-11-26T20:17:29.719Z] 12818.33 IOPS, 50.07 MiB/s
[2024-11-26T20:17:30.652Z] 13336.23 IOPS, 52.09 MiB/s
[2024-11-26T20:17:32.022Z] 13777.14 IOPS, 53.82 MiB/s
[2024-11-26T20:17:32.022Z] 14162.73 IOPS, 55.32 MiB/s
00:20:44.586 Latency(us)
00:20:44.586 [2024-11-26T20:17:32.022Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:20:44.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:20:44.586 Verification LBA range: start 0x0 length 0x4000
00:20:44.586 Nvme1n1                     :      15.00   14163.04      55.32   11192.73       0.00    5029.77     321.61 1056343.23
00:20:44.586 [2024-11-26T20:17:32.022Z] ===================================================================================================================
00:20:44.586 [2024-11-26T20:17:32.022Z] Total                       :             14163.04      55.32   11192.73       0.00    5029.77     321.61 1056343.23
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
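For reference, rpc_cmd in the trace above is the autotest wrapper around scripts/rpc.py, so the target bring-up this test performed maps to roughly the following stand-alone calls (all values taken from the trace; the default RPC socket /var/tmp/spdk.sock is assumed):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The summary numbers are self-consistent for the 4096-byte verify workload: 14163.04 IOPS x 4096 bytes / 2^20 comes to 55.32 MiB/s, matching the Total row.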
00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:44.586 rmmod nvme_rdma 00:20:44.586 rmmod nvme_fabrics 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3986424 ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3986424 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3986424 ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3986424 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3986424 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3986424' 00:20:44.586 killing process with pid 3986424 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3986424 00:20:44.586 21:17:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3986424 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:44.844 00:20:44.844 real 0m23.612s 00:20:44.844 user 1m1.783s 00:20:44.844 sys 0m5.159s 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:20:44.844 ************************************ 00:20:44.844 END TEST nvmf_bdevperf 00:20:44.844 ************************************ 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.844 ************************************ 00:20:44.844 START TEST nvmf_target_disconnect 00:20:44.844 ************************************ 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:20:44.844 * Looking for test storage... 00:20:44.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:20:44.844 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.102 --rc genhtml_branch_coverage=1 00:20:45.102 --rc genhtml_function_coverage=1 00:20:45.102 --rc genhtml_legend=1 00:20:45.102 --rc geninfo_all_blocks=1 00:20:45.102 --rc geninfo_unexecuted_blocks=1 00:20:45.102 00:20:45.102 ' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.102 --rc genhtml_branch_coverage=1 00:20:45.102 --rc genhtml_function_coverage=1 00:20:45.102 --rc genhtml_legend=1 00:20:45.102 --rc geninfo_all_blocks=1 00:20:45.102 --rc geninfo_unexecuted_blocks=1 00:20:45.102 00:20:45.102 ' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.102 --rc genhtml_branch_coverage=1 00:20:45.102 --rc genhtml_function_coverage=1 00:20:45.102 --rc genhtml_legend=1 00:20:45.102 --rc geninfo_all_blocks=1 00:20:45.102 --rc geninfo_unexecuted_blocks=1 00:20:45.102 00:20:45.102 ' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:45.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:45.102 --rc genhtml_branch_coverage=1 00:20:45.102 --rc genhtml_function_coverage=1 00:20:45.102 --rc genhtml_legend=1 00:20:45.102 --rc geninfo_all_blocks=1 00:20:45.102 --rc geninfo_unexecuted_blocks=1 00:20:45.102 00:20:45.102 ' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:45.102 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:45.103 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:20:45.103 21:17:32 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:20:50.364 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:20:50.364 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:20:50.364 21:17:37 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.364 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:50.364 Found net devices under 0000:18:00.0: mlx_0_0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:50.365 Found net devices under 0000:18:00.1: mlx_0_1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:50.365 21:17:37 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:50.365 2: mlx_0_0: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.365 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:20:50.365 altname enp24s0f0np0 00:20:50.365 altname ens785f0np0 00:20:50.365 inet 192.168.100.8/24 scope global mlx_0_0 00:20:50.365 valid_lft forever preferred_lft forever 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:50.365 3: mlx_0_1: <BROADCAST,MULTICAST> mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.365 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:20:50.365 altname enp24s0f1np1 00:20:50.365 altname ens785f1np1 00:20:50.365 inet 192.168.100.9/24 scope global mlx_0_1 00:20:50.365 valid_lft forever preferred_lft forever 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net
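The per-interface address lookup traced above is a three-stage pipeline over ip -o -4 addr show; as a reusable helper it is roughly the following (mirroring the trace, not necessarily the exact nvmf/common.sh source):

    # Print the IPv4 address(es) assigned to an interface, one per line.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this host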
00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.365 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:50.366 192.168.100.9' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:50.366 192.168.100.9' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:50.366 192.168.100.9' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:50.366 ************************************ 00:20:50.366 START TEST nvmf_target_disconnect_tc1 00:20:50.366 ************************************ 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:20:50.366 21:17:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:50.366 [2024-11-26 21:17:37.682310] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:20:50.366 [2024-11-26 21:17:37.682341] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:20:50.366 [2024-11-26 21:17:37.682349] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:20:51.301 [2024-11-26 21:17:38.686147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:20:51.301 [2024-11-26 21:17:38.686210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
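tc1 runs the reconnect example before any target is listening, so the RDMA_CM_EVENT_REJECTED and failed-state errors above are the test's expected outcome; the run is wrapped in the autotest NOT helper, whose core idiom is exit-status inversion. A reduced sketch of that idiom (the real helper also validates the executable via valid_exec_arg, as traced above):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT /bin/false && echo 'failed as expected'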
00:20:51.301 [2024-11-26 21:17:38.686236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:20:51.301 [2024-11-26 21:17:38.686295] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:20:51.301 [2024-11-26 21:17:38.686318] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:20:51.301 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:20:51.301 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:20:51.301 Initializing NVMe Controllers 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.301 00:20:51.301 real 0m1.113s 00:20:51.301 user 0m0.939s 00:20:51.301 sys 0m0.163s 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:51.301 ************************************ 00:20:51.301 END TEST nvmf_target_disconnect_tc1 00:20:51.301 ************************************ 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:20:51.301 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.302 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.302 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:51.560 ************************************ 00:20:51.560 START TEST nvmf_target_disconnect_tc2 00:20:51.560 ************************************ 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3991566 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3991566 00:20:51.560 21:17:38 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3991566 ']' 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:51.560 21:17:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:51.560 [2024-11-26 21:17:38.814684] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:20:51.560 [2024-11-26 21:17:38.814725] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.560 [2024-11-26 21:17:38.889931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:51.560 [2024-11-26 21:17:38.928356] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.560 [2024-11-26 21:17:38.928392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.560 [2024-11-26 21:17:38.928401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.560 [2024-11-26 21:17:38.928407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.560 [2024-11-26 21:17:38.928412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
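nvmfappstart amounts to launching nvmf_tgt with the requested core mask and then polling its RPC socket until the app answers; a minimal sketch, assuming a built SPDK tree and using the stock spdk_get_version RPC as the liveness probe:

    # Start the target as in this run and wait for its RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        sleep 0.5
    done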
00:20:51.560 [2024-11-26 21:17:38.929918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:51.560 [2024-11-26 21:17:38.930030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:51.560 [2024-11-26 21:17:38.930136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:51.560 [2024-11-26 21:17:38.930138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.493 Malloc0 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.493 [2024-11-26 21:17:39.708108] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1921370/0x192d2c0) succeed. 00:20:52.493 [2024-11-26 21:17:39.716403] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1922a00/0x196e960) succeed. 
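With the app up, the malloc bdev and the RDMA transport are created first, and the subsystem, namespace, and listener calls follow just below; the whole bring-up replays as five rpc.py invocations, with arguments taken verbatim from the rpc_cmd calls in this test:

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420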
00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.493 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.494 [2024-11-26 21:17:39.848901] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3991838 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:20:52.494 21:17:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:55.021 21:17:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3991566 00:20:55.021 21:17:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Write completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 Read completed with error (sct=0, sc=8) 00:20:55.956 starting I/O failed 00:20:55.956 [2024-11-26 21:17:43.032874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:56.524 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3991566 Killed "${NVMF_APP[@]}" "$@" 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3992625 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3992625 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3992625 ']' 00:20:56.524 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.525 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.525 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.525 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.525 21:17:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:56.525 [2024-11-26 21:17:43.922814] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:20:56.525 [2024-11-26 21:17:43.922865] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.783 [2024-11-26 21:17:43.997284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.783 [2024-11-26 21:17:44.034066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.783 [2024-11-26 21:17:44.034102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.783 [2024-11-26 21:17:44.034110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.783 [2024-11-26 21:17:44.034116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.783 [2024-11-26 21:17:44.034121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
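The tc2 flow in progress here is: configure the target, start the reconnect I/O example against it, kill -9 the target under live I/O, then bring up a fresh target (the nvmfpid=3992625 instance above) for the example to reconnect to. As a skeleton, reusing the exact example arguments from this run:

    # Skeleton of the tc2 disconnect/reconnect flow (binaries assumed built).
    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    reconnectpid=$!
    sleep 2
    kill -9 "$nvmfpid"        # hard-kill the target mid-I/O
    # ...restart nvmf_tgt and replay the RPC bring-up, then:
    wait "$reconnectpid"      # reconnect exits after its 10 s runtime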
00:20:56.783 [2024-11-26 21:17:44.035438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:56.783 [2024-11-26 21:17:44.035548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:56.783 [2024-11-26 21:17:44.035652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.783 [2024-11-26 21:17:44.035654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Write completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 Read completed with error (sct=0, sc=8) 00:20:56.783 starting I/O failed 00:20:56.783 [2024-11-26 21:17:44.037941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.349 
21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.349 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 Malloc0 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 [2024-11-26 21:17:44.818803] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdac370/0xdb82c0) succeed. 00:20:57.607 [2024-11-26 21:17:44.827513] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdada00/0xdf9960) succeed. 
00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 [2024-11-26 21:17:44.960072] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.607 21:17:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3991838 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 
starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Read completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.607 Write completed with error (sct=0, sc=8) 00:20:57.607 starting I/O failed 00:20:57.866 [2024-11-26 21:17:45.042846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.866 [2024-11-26 21:17:45.048278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.866 [2024-11-26 21:17:45.048329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.866 [2024-11-26 21:17:45.048347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.866 [2024-11-26 21:17:45.048355] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.866 [2024-11-26 21:17:45.048361] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.866 [2024-11-26 21:17:45.058365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.866 qpair failed and we were unable to recover it. 
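Two status codes dominate the retry loop above and below: the I/O completions with (sct=0, sc=8) carry the generic "Command Aborted due to SQ Deletion" status that in-flight commands receive when their qpair is torn down, and the CONNECT completions with sct 1, sc 130 carry the NVMe-oF Fabrics "Connect Invalid Parameters" status (0x82), which the restarted target returns because it no longer recognizes controller ID 0x1; hence the paired "Unknown controller ID 0x1" errors on the target side.

    # The same decode, for reference:
    printf 'sc %d = 0x%02x\n' 130 130   # -> sc 130 = 0x82 (fabrics: connect invalid parameters)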
00:20:57.866 [2024-11-26 21:17:45.068164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.866 [2024-11-26 21:17:45.068205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.866 [2024-11-26 21:17:45.068221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.866 [2024-11-26 21:17:45.068228] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.866 [2024-11-26 21:17:45.068235] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.866 [2024-11-26 21:17:45.078425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.866 qpair failed and we were unable to recover it. 00:20:57.866 [2024-11-26 21:17:45.088210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.866 [2024-11-26 21:17:45.088257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.866 [2024-11-26 21:17:45.088273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.866 [2024-11-26 21:17:45.088284] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.866 [2024-11-26 21:17:45.088290] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.866 [2024-11-26 21:17:45.098417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.866 qpair failed and we were unable to recover it. 00:20:57.866 [2024-11-26 21:17:45.108341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.866 [2024-11-26 21:17:45.108383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.866 [2024-11-26 21:17:45.108399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.866 [2024-11-26 21:17:45.108405] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.866 [2024-11-26 21:17:45.108411] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.866 [2024-11-26 21:17:45.118541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.866 qpair failed and we were unable to recover it. 
00:20:57.866 [2024-11-26 21:17:45.128478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.866 [2024-11-26 21:17:45.128518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.866 [2024-11-26 21:17:45.128535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.128541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.128547] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.138625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 00:20:57.867 [2024-11-26 21:17:45.148429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.867 [2024-11-26 21:17:45.148464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.867 [2024-11-26 21:17:45.148479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.148486] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.148492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.158601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 00:20:57.867 [2024-11-26 21:17:45.168556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.867 [2024-11-26 21:17:45.168594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.867 [2024-11-26 21:17:45.168610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.168617] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.168622] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.178744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 
00:20:57.867 [2024-11-26 21:17:45.188535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.867 [2024-11-26 21:17:45.188575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.867 [2024-11-26 21:17:45.188590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.188597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.188602] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.198773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 00:20:57.867 [2024-11-26 21:17:45.208564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.867 [2024-11-26 21:17:45.208604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.867 [2024-11-26 21:17:45.208619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.208626] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.208631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.218802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 00:20:57.867 [2024-11-26 21:17:45.228589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:57.867 [2024-11-26 21:17:45.228625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:57.867 [2024-11-26 21:17:45.228640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:57.867 [2024-11-26 21:17:45.228648] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:57.867 [2024-11-26 21:17:45.228653] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:57.867 [2024-11-26 21:17:45.239093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.867 qpair failed and we were unable to recover it. 
00:20:57.867 [2024-11-26 21:17:45.248750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:57.867 [2024-11-26 21:17:45.248792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:57.867 [2024-11-26 21:17:45.248808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:57.867 [2024-11-26 21:17:45.248815] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:57.867 [2024-11-26 21:17:45.248821] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:57.867 [2024-11-26 21:17:45.258786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.867 qpair failed and we were unable to recover it.
00:20:57.867 [2024-11-26 21:17:45.268647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:57.867 [2024-11-26 21:17:45.268688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:57.867 [2024-11-26 21:17:45.268704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:57.867 [2024-11-26 21:17:45.268710] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:57.867 [2024-11-26 21:17:45.268716] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:57.867 [2024-11-26 21:17:45.279029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.867 qpair failed and we were unable to recover it.
00:20:57.867 [2024-11-26 21:17:45.288695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:57.867 [2024-11-26 21:17:45.288739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:57.867 [2024-11-26 21:17:45.288755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:57.867 [2024-11-26 21:17:45.288761] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:57.867 [2024-11-26 21:17:45.288767] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:57.867 [2024-11-26 21:17:45.299091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:57.867 qpair failed and we were unable to recover it.
00:20:58.139 [2024-11-26 21:17:45.308728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.139 [2024-11-26 21:17:45.308770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.139 [2024-11-26 21:17:45.308785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.139 [2024-11-26 21:17:45.308792] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.139 [2024-11-26 21:17:45.308797] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.139 [2024-11-26 21:17:45.319089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.139 qpair failed and we were unable to recover it.
00:20:58.139 [2024-11-26 21:17:45.328976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.139 [2024-11-26 21:17:45.329011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.139 [2024-11-26 21:17:45.329026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.139 [2024-11-26 21:17:45.329032] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.139 [2024-11-26 21:17:45.329038] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.139 [2024-11-26 21:17:45.338979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.139 qpair failed and we were unable to recover it.
00:20:58.139 [2024-11-26 21:17:45.348890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.139 [2024-11-26 21:17:45.348931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.139 [2024-11-26 21:17:45.348949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.139 [2024-11-26 21:17:45.348956] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.139 [2024-11-26 21:17:45.348961] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.139 [2024-11-26 21:17:45.359104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.139 qpair failed and we were unable to recover it.
00:20:58.139 [2024-11-26 21:17:45.369055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.139 [2024-11-26 21:17:45.369094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.139 [2024-11-26 21:17:45.369111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.139 [2024-11-26 21:17:45.369118] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.139 [2024-11-26 21:17:45.369124] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.139 [2024-11-26 21:17:45.379212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.139 qpair failed and we were unable to recover it.
00:20:58.139 [2024-11-26 21:17:45.389032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.139 [2024-11-26 21:17:45.389071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.139 [2024-11-26 21:17:45.389086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.139 [2024-11-26 21:17:45.389093] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.139 [2024-11-26 21:17:45.389098] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.139 [2024-11-26 21:17:45.399279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.409050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.409092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.409108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.409114] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.409120] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.419317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.429385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.429424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.429438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.429444] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.429454] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.439415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.449294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.449338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.449353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.449360] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.449366] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.459373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.469231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.469276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.469292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.469298] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.469304] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.479582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.489312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.489349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.489364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.489371] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.489376] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.499432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.509316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.509355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.509371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.509377] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.509383] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.519789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.529453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.529498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.529513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.529520] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.529525] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.539747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.549433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.549470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.549486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.549492] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.549498] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.140 [2024-11-26 21:17:45.559654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.140 qpair failed and we were unable to recover it.
00:20:58.140 [2024-11-26 21:17:45.569504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.140 [2024-11-26 21:17:45.569542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.140 [2024-11-26 21:17:45.569557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.140 [2024-11-26 21:17:45.569564] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.140 [2024-11-26 21:17:45.569569] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.579891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.589482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.589524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.589539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.589545] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.589551] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.599903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.609574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.609613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.609628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.609634] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.609640] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.619983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.629679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.629716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.629732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.629738] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.629744] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.639905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.649731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.649770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.649785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.649792] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.649798] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.660024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.669867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.669907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.669922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.669929] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.669934] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.680255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.399 qpair failed and we were unable to recover it.
00:20:58.399 [2024-11-26 21:17:45.689881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.399 [2024-11-26 21:17:45.689921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.399 [2024-11-26 21:17:45.689939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.399 [2024-11-26 21:17:45.689945] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.399 [2024-11-26 21:17:45.689951] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.399 [2024-11-26 21:17:45.700107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.709971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.710006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.710023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.710030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.710036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.720269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.729938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.729974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.729990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.729997] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.730002] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.740276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.749997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.750038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.750052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.750059] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.750065] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.760452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.770098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.770141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.770156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.770163] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.770172] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.780456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.790166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.790206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.790221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.790228] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.790233] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.800692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.810275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.810309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.810324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.810331] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.810336] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.400 [2024-11-26 21:17:45.820589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.400 qpair failed and we were unable to recover it.
00:20:58.400 [2024-11-26 21:17:45.830308] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.400 [2024-11-26 21:17:45.830350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.400 [2024-11-26 21:17:45.830365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.400 [2024-11-26 21:17:45.830371] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.400 [2024-11-26 21:17:45.830377] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.658 [2024-11-26 21:17:45.840686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.850328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.850372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.850387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.850394] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.850399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.860595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.870351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.870393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.870409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.870415] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.870421] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.880968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.890292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.890333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.890348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.890355] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.890360] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.900733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.910552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.910593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.910608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.910614] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.910620] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.920949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.930608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.930647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.930662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.930669] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.930674] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.940857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.950517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.950560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.950575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.950582] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.950588] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.960881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.970627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.970659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.970674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.970680] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.970686] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:45.980965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:45.990791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:45.990833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:45.990848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.659 [2024-11-26 21:17:45.990855] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.659 [2024-11-26 21:17:45.990860] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.659 [2024-11-26 21:17:46.000887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.659 qpair failed and we were unable to recover it.
00:20:58.659 [2024-11-26 21:17:46.010818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.659 [2024-11-26 21:17:46.010855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.659 [2024-11-26 21:17:46.010870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.660 [2024-11-26 21:17:46.010877] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.660 [2024-11-26 21:17:46.010882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.660 [2024-11-26 21:17:46.021103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.660 qpair failed and we were unable to recover it.
00:20:58.660 [2024-11-26 21:17:46.030758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.660 [2024-11-26 21:17:46.030798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.660 [2024-11-26 21:17:46.030816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.660 [2024-11-26 21:17:46.030823] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.660 [2024-11-26 21:17:46.030829] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.660 [2024-11-26 21:17:46.041166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.660 qpair failed and we were unable to recover it.
00:20:58.660 [2024-11-26 21:17:46.050987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.660 [2024-11-26 21:17:46.051023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.660 [2024-11-26 21:17:46.051038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.660 [2024-11-26 21:17:46.051045] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.660 [2024-11-26 21:17:46.051051] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.660 [2024-11-26 21:17:46.061048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.660 qpair failed and we were unable to recover it.
00:20:58.660 [2024-11-26 21:17:46.071016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.660 [2024-11-26 21:17:46.071056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.660 [2024-11-26 21:17:46.071071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.660 [2024-11-26 21:17:46.071077] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.660 [2024-11-26 21:17:46.071083] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.660 [2024-11-26 21:17:46.081297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.660 qpair failed and we were unable to recover it.
00:20:58.660 [2024-11-26 21:17:46.090932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.660 [2024-11-26 21:17:46.090974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.660 [2024-11-26 21:17:46.090989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.660 [2024-11-26 21:17:46.090996] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.660 [2024-11-26 21:17:46.091002] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.918 [2024-11-26 21:17:46.101366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.918 qpair failed and we were unable to recover it.
00:20:58.918 [2024-11-26 21:17:46.111069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.918 [2024-11-26 21:17:46.111106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.918 [2024-11-26 21:17:46.111121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.918 [2024-11-26 21:17:46.111128] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.918 [2024-11-26 21:17:46.111136] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.918 [2024-11-26 21:17:46.121533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.918 qpair failed and we were unable to recover it.
00:20:58.918 [2024-11-26 21:17:46.131100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.918 [2024-11-26 21:17:46.131142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.131157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.131164] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.131170] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.141451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.151119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.151160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.151174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.151181] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.151187] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.161364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.171180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.171219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.171234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.171240] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.171246] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.181626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.191209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.191247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.191267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.191274] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.191279] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.201523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.211237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.211278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.211294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.211301] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.211306] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.221691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.231405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.231446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.231461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.231468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.231473] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.241711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.251392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.251427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.251443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.251449] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.251455] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.261689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.271471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.271516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.271532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.271539] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.271544] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.281879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.291582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.291627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.291643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.291649] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.291655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.301776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.311647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.311685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.311701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.311707] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.311713] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.322126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.331670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.331709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.331724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.331730] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.331736] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:58.919 [2024-11-26 21:17:46.342014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:58.919 qpair failed and we were unable to recover it.
00:20:58.919 [2024-11-26 21:17:46.351695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:58.919 [2024-11-26 21:17:46.351738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:58.919 [2024-11-26 21:17:46.351753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:58.919 [2024-11-26 21:17:46.351760] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:58.919 [2024-11-26 21:17:46.351765] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:59.177 [2024-11-26 21:17:46.362165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:59.177 qpair failed and we were unable to recover it.
00:20:59.177 [2024-11-26 21:17:46.371781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:20:59.177 [2024-11-26 21:17:46.371822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:20:59.177 [2024-11-26 21:17:46.371840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:20:59.177 [2024-11-26 21:17:46.371850] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:20:59.177 [2024-11-26 21:17:46.371855] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:20:59.177 [2024-11-26 21:17:46.382046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:59.177 qpair failed and we were unable to recover it.
00:20:59.177 [2024-11-26 21:17:46.391911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.177 [2024-11-26 21:17:46.391952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.177 [2024-11-26 21:17:46.391967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.177 [2024-11-26 21:17:46.391973] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.177 [2024-11-26 21:17:46.391979] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.177 [2024-11-26 21:17:46.402163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.177 qpair failed and we were unable to recover it. 00:20:59.177 [2024-11-26 21:17:46.411885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.177 [2024-11-26 21:17:46.411930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.177 [2024-11-26 21:17:46.411946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.177 [2024-11-26 21:17:46.411953] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.177 [2024-11-26 21:17:46.411958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.422321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.432014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.432048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.432063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.432070] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.432076] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.442214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 
00:20:59.178 [2024-11-26 21:17:46.451991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.452030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.452046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.452052] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.452057] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.462339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.472140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.472180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.472196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.472202] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.472208] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.482501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.492106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.492145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.492159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.492166] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.492171] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.502423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 
00:20:59.178 [2024-11-26 21:17:46.512273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.512309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.512325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.512332] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.512337] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.522864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.532146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.532187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.532202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.532209] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.532215] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.542623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.552300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.552341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.552356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.552363] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.552369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.562479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 
00:20:59.178 [2024-11-26 21:17:46.573563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.573604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.573619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.573626] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.573631] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.582601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.178 [2024-11-26 21:17:46.592422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.178 [2024-11-26 21:17:46.592465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.178 [2024-11-26 21:17:46.592481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.178 [2024-11-26 21:17:46.592487] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.178 [2024-11-26 21:17:46.592493] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.178 [2024-11-26 21:17:46.602725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.178 qpair failed and we were unable to recover it. 00:20:59.437 [2024-11-26 21:17:46.612447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.437 [2024-11-26 21:17:46.612488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.437 [2024-11-26 21:17:46.612503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.437 [2024-11-26 21:17:46.612510] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.437 [2024-11-26 21:17:46.612515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.437 [2024-11-26 21:17:46.622670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.437 qpair failed and we were unable to recover it. 
00:20:59.437 [2024-11-26 21:17:46.632663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.437 [2024-11-26 21:17:46.632706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.437 [2024-11-26 21:17:46.632724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.437 [2024-11-26 21:17:46.632731] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.437 [2024-11-26 21:17:46.632737] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.437 [2024-11-26 21:17:46.643070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.437 qpair failed and we were unable to recover it. 00:20:59.437 [2024-11-26 21:17:46.652508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.437 [2024-11-26 21:17:46.652549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.437 [2024-11-26 21:17:46.652565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.437 [2024-11-26 21:17:46.652572] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.437 [2024-11-26 21:17:46.652577] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.437 [2024-11-26 21:17:46.662710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.437 qpair failed and we were unable to recover it. 00:20:59.437 [2024-11-26 21:17:46.672761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.437 [2024-11-26 21:17:46.672794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.437 [2024-11-26 21:17:46.672809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.437 [2024-11-26 21:17:46.672816] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.437 [2024-11-26 21:17:46.672821] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.437 [2024-11-26 21:17:46.682982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.437 qpair failed and we were unable to recover it. 
00:20:59.437 [2024-11-26 21:17:46.692679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.437 [2024-11-26 21:17:46.692719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.692735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.692741] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.692747] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.703014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.712875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.712916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.712931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.712941] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.712947] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.723249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.732783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.732823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.732838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.732845] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.732851] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.743139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 
00:20:59.438 [2024-11-26 21:17:46.752935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.752972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.752987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.752994] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.753000] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.763209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.772965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.772999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.773014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.773021] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.773026] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.783272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.793036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.793078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.793093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.793099] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.793105] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.803404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 
00:20:59.438 [2024-11-26 21:17:46.813026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.813069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.813085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.813093] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.813098] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.823435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.833242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.833287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.833302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.833309] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.833315] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.843402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 00:20:59.438 [2024-11-26 21:17:46.853402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.438 [2024-11-26 21:17:46.853441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.438 [2024-11-26 21:17:46.853457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.438 [2024-11-26 21:17:46.853464] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.438 [2024-11-26 21:17:46.853469] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.438 [2024-11-26 21:17:46.863567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.438 qpair failed and we were unable to recover it. 
00:20:59.697 [2024-11-26 21:17:46.873417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.873456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.873471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.873478] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.873484] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.883736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:46.893395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.893440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.893455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.893461] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.893467] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.903480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:46.913413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.913449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.913466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.913472] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.913478] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.923747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 
00:20:59.697 [2024-11-26 21:17:46.933394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.933437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.933452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.933460] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.933465] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.943854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:46.953569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.953610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.953626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.953633] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.953639] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.963687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:46.973602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.973642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.973661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.973667] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.973673] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:46.983884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 
00:20:59.697 [2024-11-26 21:17:46.993588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:46.993622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:46.993638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:46.993644] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:46.993651] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:47.003950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:47.013750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:47.013786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:47.013802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:47.013809] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:47.013815] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.697 [2024-11-26 21:17:47.023825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.697 qpair failed and we were unable to recover it. 00:20:59.697 [2024-11-26 21:17:47.033836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.697 [2024-11-26 21:17:47.033876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.697 [2024-11-26 21:17:47.033891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.697 [2024-11-26 21:17:47.033898] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.697 [2024-11-26 21:17:47.033903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.698 [2024-11-26 21:17:47.044013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.698 qpair failed and we were unable to recover it. 
00:20:59.698 [2024-11-26 21:17:47.053786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.698 [2024-11-26 21:17:47.053823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.698 [2024-11-26 21:17:47.053838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.698 [2024-11-26 21:17:47.053848] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.698 [2024-11-26 21:17:47.053854] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.698 [2024-11-26 21:17:47.063967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.698 qpair failed and we were unable to recover it. 00:20:59.698 [2024-11-26 21:17:47.073822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.698 [2024-11-26 21:17:47.073859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.698 [2024-11-26 21:17:47.073876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.698 [2024-11-26 21:17:47.073883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.698 [2024-11-26 21:17:47.073889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.698 [2024-11-26 21:17:47.084082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.698 qpair failed and we were unable to recover it. 00:20:59.698 [2024-11-26 21:17:47.093864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.698 [2024-11-26 21:17:47.093899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.698 [2024-11-26 21:17:47.093914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.698 [2024-11-26 21:17:47.093920] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.698 [2024-11-26 21:17:47.093927] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.698 [2024-11-26 21:17:47.104128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.698 qpair failed and we were unable to recover it. 
00:20:59.698 [2024-11-26 21:17:47.113932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.698 [2024-11-26 21:17:47.113973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.698 [2024-11-26 21:17:47.113988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.698 [2024-11-26 21:17:47.113994] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.698 [2024-11-26 21:17:47.114000] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.698 [2024-11-26 21:17:47.124421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.698 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.134043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.134081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.134096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.134103] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.134108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.144302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.154044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.154084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.154099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.154106] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.154111] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.164749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 
00:20:59.957 [2024-11-26 21:17:47.174143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.174184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.174200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.174206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.174212] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.184547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.194249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.194293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.194308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.194315] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.194320] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.204621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.214195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.214231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.214246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.214256] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.214262] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.224489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 
00:20:59.957 [2024-11-26 21:17:47.234311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.234348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.234364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.234371] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.234376] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.244585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.254311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.254345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.254360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.254367] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.254372] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.264620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.274396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.274437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.274452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.274459] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.274464] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.284833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 
00:20:59.957 [2024-11-26 21:17:47.294468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.294510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.294527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.294534] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.294539] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.304676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.314524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.314561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.314580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.314587] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.314592] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.324960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.334555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.334594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.334609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.334616] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.334621] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.344913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 
00:20:59.957 [2024-11-26 21:17:47.354677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.354717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.354732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.354739] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.354744] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.365024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:20:59.957 [2024-11-26 21:17:47.374809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:20:59.957 [2024-11-26 21:17:47.374853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:20:59.957 [2024-11-26 21:17:47.374870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:20:59.957 [2024-11-26 21:17:47.374876] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:20:59.957 [2024-11-26 21:17:47.374882] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:20:59.957 [2024-11-26 21:17:47.384975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:59.957 qpair failed and we were unable to recover it. 00:21:00.215 [2024-11-26 21:17:47.394882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.215 [2024-11-26 21:17:47.394928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.215 [2024-11-26 21:17:47.394943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.215 [2024-11-26 21:17:47.394950] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.215 [2024-11-26 21:17:47.394959] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.215 [2024-11-26 21:17:47.405184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.215 qpair failed and we were unable to recover it. 
00:21:00.216 [2024-11-26 21:17:47.414771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.414812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.414827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.414834] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.414840] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.425106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.434876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.434919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.434934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.434941] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.434946] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.445251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.454991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.455026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.455040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.455047] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.455052] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.465312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 
00:21:00.216 [2024-11-26 21:17:47.475034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.475074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.475089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.475095] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.475101] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.485381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.495036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.495072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.495087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.495094] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.495099] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.505320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.515039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.515080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.515095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.515102] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.515108] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.525537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 
00:21:00.216 [2024-11-26 21:17:47.535250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.535289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.535304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.535310] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.535316] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.545467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.555194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.555233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.555249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.555260] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.555265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.565599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.575320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.575364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.575379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.575386] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.575391] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.585439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 
00:21:00.216 [2024-11-26 21:17:47.595266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.595308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.595322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.216 [2024-11-26 21:17:47.595329] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.216 [2024-11-26 21:17:47.595334] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.216 [2024-11-26 21:17:47.605697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.216 qpair failed and we were unable to recover it. 00:21:00.216 [2024-11-26 21:17:47.615406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.216 [2024-11-26 21:17:47.615448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.216 [2024-11-26 21:17:47.615464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.217 [2024-11-26 21:17:47.615471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.217 [2024-11-26 21:17:47.615476] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.217 [2024-11-26 21:17:47.625652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.217 qpair failed and we were unable to recover it. 00:21:00.217 [2024-11-26 21:17:47.635451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.217 [2024-11-26 21:17:47.635486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.217 [2024-11-26 21:17:47.635502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.217 [2024-11-26 21:17:47.635508] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.217 [2024-11-26 21:17:47.635514] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.217 [2024-11-26 21:17:47.645728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.217 qpair failed and we were unable to recover it. 
00:21:00.473 [2024-11-26 21:17:47.655422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.473 [2024-11-26 21:17:47.655469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.473 [2024-11-26 21:17:47.655490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.473 [2024-11-26 21:17:47.655497] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.473 [2024-11-26 21:17:47.655502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.473 [2024-11-26 21:17:47.665821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.473 qpair failed and we were unable to recover it. 00:21:00.473 [2024-11-26 21:17:47.675624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.473 [2024-11-26 21:17:47.675666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.473 [2024-11-26 21:17:47.675680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.473 [2024-11-26 21:17:47.675686] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.473 [2024-11-26 21:17:47.675692] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.473 [2024-11-26 21:17:47.686018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.473 qpair failed and we were unable to recover it. 00:21:00.473 [2024-11-26 21:17:47.695672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.473 [2024-11-26 21:17:47.695709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.473 [2024-11-26 21:17:47.695724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.473 [2024-11-26 21:17:47.695731] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.473 [2024-11-26 21:17:47.695736] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.705983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 
00:21:00.474 [2024-11-26 21:17:47.715737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.715776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.715791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.715798] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.715804] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.725923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.735728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.735767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.735783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.735790] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.735799] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.746074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.755794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.755835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.755850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.755857] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.755862] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.766211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 
00:21:00.474 [2024-11-26 21:17:47.775764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.775810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.775825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.775832] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.775838] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.786101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.795980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.796017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.796033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.796039] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.796045] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.806488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.815860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.815894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.815909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.815916] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.815922] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.826192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 
00:21:00.474 [2024-11-26 21:17:47.836002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.836043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.836058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.836064] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.836069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.846322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.856086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.856129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.856145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.856152] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.856158] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.866284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.474 [2024-11-26 21:17:47.876119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.876157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.876172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.876179] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.876186] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.886561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 
00:21:00.474 [2024-11-26 21:17:47.896197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.474 [2024-11-26 21:17:47.896232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.474 [2024-11-26 21:17:47.896248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.474 [2024-11-26 21:17:47.896258] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.474 [2024-11-26 21:17:47.896265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.474 [2024-11-26 21:17:47.906580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.474 qpair failed and we were unable to recover it. 00:21:00.731 [2024-11-26 21:17:47.916212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.731 [2024-11-26 21:17:47.916262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.731 [2024-11-26 21:17:47.916277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.731 [2024-11-26 21:17:47.916284] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.731 [2024-11-26 21:17:47.916290] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.731 [2024-11-26 21:17:47.926512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.731 qpair failed and we were unable to recover it. 00:21:00.731 [2024-11-26 21:17:47.936327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.731 [2024-11-26 21:17:47.936371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.731 [2024-11-26 21:17:47.936387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.731 [2024-11-26 21:17:47.936393] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.731 [2024-11-26 21:17:47.936399] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.731 [2024-11-26 21:17:47.946768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.731 qpair failed and we were unable to recover it. 
00:21:00.731 [2024-11-26 21:17:47.956282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:47.956318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:47.956335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:47.956342] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:47.956348] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:47.966694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:47.976457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:47.976493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:47.976508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:47.976515] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:47.976520] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:47.986699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:47.996467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:47.996509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:47.996523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:47.996533] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:47.996539] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.006945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 
00:21:00.732 [2024-11-26 21:17:48.016452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.016490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.016505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.016512] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.016517] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.026846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:48.036501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.036541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.036556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.036563] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.036569] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.046964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:48.056619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.056653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.056668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.056674] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.056680] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.067011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 
00:21:00.732 [2024-11-26 21:17:48.076720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.076763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.076778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.076785] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.076794] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.086985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:48.096683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.096721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.096736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.096743] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.096748] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.106849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:48.116803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.116840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.116856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.116863] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.116868] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.127112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 
00:21:00.732 [2024-11-26 21:17:48.136892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.136931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.136946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.136952] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.136958] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.732 [2024-11-26 21:17:48.147298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.732 qpair failed and we were unable to recover it. 00:21:00.732 [2024-11-26 21:17:48.156896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.732 [2024-11-26 21:17:48.156936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.732 [2024-11-26 21:17:48.156951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.732 [2024-11-26 21:17:48.156957] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.732 [2024-11-26 21:17:48.156963] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.990 [2024-11-26 21:17:48.167213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.990 qpair failed and we were unable to recover it. 00:21:00.990 [2024-11-26 21:17:48.177073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.990 [2024-11-26 21:17:48.177109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.990 [2024-11-26 21:17:48.177124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.990 [2024-11-26 21:17:48.177131] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.990 [2024-11-26 21:17:48.177137] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.990 [2024-11-26 21:17:48.187277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.990 qpair failed and we were unable to recover it. 
00:21:00.990 [2024-11-26 21:17:48.197089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.990 [2024-11-26 21:17:48.197128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.990 [2024-11-26 21:17:48.197143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.990 [2024-11-26 21:17:48.197150] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.990 [2024-11-26 21:17:48.197155] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.990 [2024-11-26 21:17:48.207279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.990 qpair failed and we were unable to recover it. 00:21:00.990 [2024-11-26 21:17:48.217125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.990 [2024-11-26 21:17:48.217161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.990 [2024-11-26 21:17:48.217177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.217184] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.217189] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.227486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.237197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.237237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.237251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.237269] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.237275] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.247606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 
00:21:00.991 [2024-11-26 21:17:48.257319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.257363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.257381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.257387] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.257393] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.267567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.277374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.277411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.277427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.277433] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.277439] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.287579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.297331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.297367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.297383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.297389] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.297395] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.307529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 
00:21:00.991 [2024-11-26 21:17:48.317379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.317419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.317434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.317440] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.317445] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.327745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.337428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.337471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.337486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.337496] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.337502] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.347648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.357587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.357622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.357638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.357645] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.357650] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.367820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 
00:21:00.991 [2024-11-26 21:17:48.377682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.377716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.377733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.377740] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.377745] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.387937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.397675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.397716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.397730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.397737] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.397743] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:00.991 [2024-11-26 21:17:48.408024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:00.991 qpair failed and we were unable to recover it. 00:21:00.991 [2024-11-26 21:17:48.417752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:00.991 [2024-11-26 21:17:48.417793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:00.991 [2024-11-26 21:17:48.417808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:00.991 [2024-11-26 21:17:48.417815] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:00.991 [2024-11-26 21:17:48.417821] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.249 [2024-11-26 21:17:48.428056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.249 qpair failed and we were unable to recover it. 
00:21:01.249 [2024-11-26 21:17:48.437818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.249 [2024-11-26 21:17:48.437861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.249 [2024-11-26 21:17:48.437877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.249 [2024-11-26 21:17:48.437884] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.249 [2024-11-26 21:17:48.437889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.249 [2024-11-26 21:17:48.448431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.249 qpair failed and we were unable to recover it. 00:21:01.249 [2024-11-26 21:17:48.457812] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.249 [2024-11-26 21:17:48.457851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.249 [2024-11-26 21:17:48.457867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.249 [2024-11-26 21:17:48.457874] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.249 [2024-11-26 21:17:48.457880] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.249 [2024-11-26 21:17:48.468135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.249 qpair failed and we were unable to recover it. 00:21:01.249 [2024-11-26 21:17:48.477844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.249 [2024-11-26 21:17:48.477883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.249 [2024-11-26 21:17:48.477898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.249 [2024-11-26 21:17:48.477904] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.249 [2024-11-26 21:17:48.477910] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.249 [2024-11-26 21:17:48.488260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.249 qpair failed and we were unable to recover it. 
00:21:01.249 [2024-11-26 21:17:48.497944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.249 [2024-11-26 21:17:48.497979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.249 [2024-11-26 21:17:48.497994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.249 [2024-11-26 21:17:48.498001] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.249 [2024-11-26 21:17:48.498007] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.249 [2024-11-26 21:17:48.508242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.249 qpair failed and we were unable to recover it. 00:21:01.250 [2024-11-26 21:17:48.518006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.250 [2024-11-26 21:17:48.518042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.250 [2024-11-26 21:17:48.518057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.250 [2024-11-26 21:17:48.518063] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.250 [2024-11-26 21:17:48.518069] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.250 [2024-11-26 21:17:48.528229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.250 qpair failed and we were unable to recover it. 00:21:01.250 [2024-11-26 21:17:48.538021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.250 [2024-11-26 21:17:48.538060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.250 [2024-11-26 21:17:48.538076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.250 [2024-11-26 21:17:48.538082] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.250 [2024-11-26 21:17:48.538088] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.250 [2024-11-26 21:17:48.548330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.250 qpair failed and we were unable to recover it. 
00:21:01.250 [2024-11-26 21:17:48.558127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.250 [2024-11-26 21:17:48.558168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.250 [2024-11-26 21:17:48.558183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.250 [2024-11-26 21:17:48.558190] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.250 [2024-11-26 21:17:48.558195] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.250 [2024-11-26 21:17:48.568402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.250 qpair failed and we were unable to recover it. 00:21:01.250 [2024-11-26 21:17:48.578305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.250 [2024-11-26 21:17:48.578342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.250 [2024-11-26 21:17:48.578357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.250 [2024-11-26 21:17:48.578364] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.250 [2024-11-26 21:17:48.578369] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.250 [2024-11-26 21:17:48.588538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.250 qpair failed and we were unable to recover it. 00:21:01.250 [2024-11-26 21:17:48.598290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:01.250 [2024-11-26 21:17:48.598325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:01.250 [2024-11-26 21:17:48.598344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:01.250 [2024-11-26 21:17:48.598350] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:01.250 [2024-11-26 21:17:48.598356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:01.250 [2024-11-26 21:17:48.608589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:01.250 qpair failed and we were unable to recover it. 
00:21:02.545 [2024-11-26 21:17:49.881883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.545 [2024-11-26 21:17:49.881919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.545 [2024-11-26 21:17:49.881937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.545 [2024-11-26 21:17:49.881944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.545 [2024-11-26 21:17:49.881949] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.545 [2024-11-26 21:17:49.892373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.545 qpair failed and we were unable to recover it. 00:21:02.545 [2024-11-26 21:17:49.901968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.545 [2024-11-26 21:17:49.902008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.545 [2024-11-26 21:17:49.902023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.545 [2024-11-26 21:17:49.902030] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.545 [2024-11-26 21:17:49.902036] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.545 [2024-11-26 21:17:49.912124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.545 qpair failed and we were unable to recover it. 00:21:02.545 [2024-11-26 21:17:49.922073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.545 [2024-11-26 21:17:49.922112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.545 [2024-11-26 21:17:49.922127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.545 [2024-11-26 21:17:49.922134] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.545 [2024-11-26 21:17:49.922139] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.545 [2024-11-26 21:17:49.932309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.545 qpair failed and we were unable to recover it. 
00:21:02.545 [2024-11-26 21:17:49.941960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.545 [2024-11-26 21:17:49.942002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.545 [2024-11-26 21:17:49.942017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.545 [2024-11-26 21:17:49.942024] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.545 [2024-11-26 21:17:49.942029] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.545 [2024-11-26 21:17:49.952436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.545 qpair failed and we were unable to recover it. 00:21:02.545 [2024-11-26 21:17:49.962097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.545 [2024-11-26 21:17:49.962137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.545 [2024-11-26 21:17:49.962152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.545 [2024-11-26 21:17:49.962162] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.545 [2024-11-26 21:17:49.962167] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.545 [2024-11-26 21:17:49.972595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.545 qpair failed and we were unable to recover it. 00:21:02.803 [2024-11-26 21:17:49.982239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.803 [2024-11-26 21:17:49.982284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.803 [2024-11-26 21:17:49.982299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.803 [2024-11-26 21:17:49.982305] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.803 [2024-11-26 21:17:49.982311] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.803 [2024-11-26 21:17:49.992636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.803 qpair failed and we were unable to recover it. 
00:21:02.803 [2024-11-26 21:17:50.002190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.803 [2024-11-26 21:17:50.002232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.803 [2024-11-26 21:17:50.002247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.803 [2024-11-26 21:17:50.002259] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.803 [2024-11-26 21:17:50.002265] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.803 [2024-11-26 21:17:50.012635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.803 qpair failed and we were unable to recover it. 00:21:02.803 [2024-11-26 21:17:50.022402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.803 [2024-11-26 21:17:50.022443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.803 [2024-11-26 21:17:50.022459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.803 [2024-11-26 21:17:50.022466] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.803 [2024-11-26 21:17:50.022472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.803 [2024-11-26 21:17:50.032636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.803 qpair failed and we were unable to recover it. 00:21:02.803 [2024-11-26 21:17:50.042280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.803 [2024-11-26 21:17:50.042326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.803 [2024-11-26 21:17:50.042342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.803 [2024-11-26 21:17:50.042349] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.804 [2024-11-26 21:17:50.042355] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.804 [2024-11-26 21:17:50.052634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.804 qpair failed and we were unable to recover it. 
00:21:02.804 [2024-11-26 21:17:50.062438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.804 [2024-11-26 21:17:50.062478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.804 [2024-11-26 21:17:50.062495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.804 [2024-11-26 21:17:50.062502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.804 [2024-11-26 21:17:50.062508] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.804 [2024-11-26 21:17:50.072820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.804 qpair failed and we were unable to recover it. 00:21:02.804 [2024-11-26 21:17:50.082420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:21:02.804 [2024-11-26 21:17:50.082461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:21:02.804 [2024-11-26 21:17:50.082476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:21:02.804 [2024-11-26 21:17:50.082483] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:21:02.804 [2024-11-26 21:17:50.082489] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:21:02.804 [2024-11-26 21:17:50.092731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.804 qpair failed and we were unable to recover it. 
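The loop above is the tc2 host retrying the fabrics CONNECT for an I/O queue against a target that no longer recognizes controller ID 0x1. Per the NVMe specification, sct 1 is command-specific status and sc 130 (0x82) on a Connect command is "Connect Invalid Parameters"; the -6 in the CQ transport error is ENXIO, which the log itself spells out as "No such device or address". When triaging a console capture like this one, a quick tally usually separates expected disconnect-test noise from a real regression; a minimal sketch, assuming the console output was saved to build.log (the file name is hypothetical):

    # count recovery failures per qpair id (GNU grep)
    grep -o 'CQ transport error -6 (No such device or address) on qpair id [0-9]*' build.log | sort | uniq -c

    # count aborted I/Os; sct=0, sc=8 is the generic "Command Aborted due to SQ Deletion" status
    grep -c 'completed with error (sct=0, sc=8)' build.log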
00:21:03.744 Read completed with error (sct=0, sc=8)
00:21:03.744 starting I/O failed
(All remaining queued Read/Write completions fail the same way, sct=0, sc=8, each followed by: starting I/O failed.)
00:21:03.744 [2024-11-26 21:17:51.097678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:03.744 [2024-11-26 21:17:51.105103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:21:03.744 [2024-11-26 21:17:51.105146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:21:03.744 [2024-11-26 21:17:51.105162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:21:03.745 [2024-11-26 21:17:51.105169] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:21:03.745 [2024-11-26 21:17:51.105175] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d1380
00:21:03.745 [2024-11-26 21:17:51.115702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:03.745 qpair failed and we were unable to recover it.
(One further CONNECT attempt fails the same way on qpair id 4, 21:17:51.125435 through 21:17:51.135700, rqpair=0x2000002d1380, ending: qpair failed and we were unable to recover it.)
00:21:03.745 [2024-11-26 21:17:51.135823] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:21:03.745 A controller has encountered a failure and is being reset.
00:21:03.745 [2024-11-26 21:17:51.135939] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:21:03.745 [2024-11-26 21:17:51.171380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:21:04.023 Controller properly reset.
00:21:04.023 Initializing NVMe Controllers
00:21:04.023 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:04.023 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:04.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:21:04.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:21:04.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:21:04.023 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:21:04.023 Initialization complete. Launching workers.
00:21:04.023 Starting thread on core 1
00:21:04.023 Starting thread on core 2
00:21:04.023 Starting thread on core 3
00:21:04.023 Starting thread on core 0
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:21:04.023
00:21:04.023 real 0m12.481s
00:21:04.023 user 0m27.942s
00:21:04.023 sys 0m2.301s
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:21:04.023 ************************************
00:21:04.023 END TEST nvmf_target_disconnect_tc2
00:21:04.023 ************************************
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']'
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:21:04.023 ************************************
00:21:04.023 START TEST nvmf_target_disconnect_tc3
00:21:04.023 ************************************
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3993973
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2
00:21:04.023 21:17:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'
00:21:05.965 21:17:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3992625
00:21:05.965 21:17:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2
00:21:07.340 Write completed with error (sct=0, sc=8)
00:21:07.340 starting I/O failed
(All remaining queued Read/Write completions fail the same way, sct=0, sc=8, each followed by: starting I/O failed.)
00:21:07.340 [2024-11-26 21:17:54.491934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:21:07.907 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3992625 Killed "${NVMF_APP[@]}" "$@"
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3994628
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3994628
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3994628 ']'
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:07.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:07.908 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.166 [2024-11-26 21:17:55.377334] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization...
00:21:08.166 [2024-11-26 21:17:55.377398] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:08.166 [2024-11-26 21:17:55.453281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:08.166 [2024-11-26 21:17:55.490275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:08.166 [2024-11-26 21:17:55.490314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:08.166 [2024-11-26 21:17:55.490323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:08.166 [2024-11-26 21:17:55.490330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:08.166 [2024-11-26 21:17:55.490336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
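The app_setup_trace notices above give the exact capture recipe for this run (instance id 0, shm file /dev/shm/nvmf_trace.0, tracepoint mask 0xFFFF from the -e flag). A sketch of both options, assuming the SPDK build tree used by this job and a target that is still running:

    # snapshot the nvmf tracepoints named in the notice (run from the SPDK repo root)
    ./build/bin/spdk_trace -s nvmf -i 0

    # or keep the raw shm file for offline decoding after the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0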
00:21:08.166 [2024-11-26 21:17:55.491834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:21:08.166 [2024-11-26 21:17:55.491941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:21:08.166 [2024-11-26 21:17:55.492046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:21:08.166 [2024-11-26 21:17:55.492048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:21:08.166 Read completed with error (sct=0, sc=8)
00:21:08.166 starting I/O failed
(All remaining queued Read/Write completions fail the same way, sct=0, sc=8, each followed by: starting I/O failed.)
00:21:08.166 [2024-11-26 21:17:55.496979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:21:08.166 [2024-11-26 21:17:55.498564] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:08.166 [2024-11-26 21:17:55.498583] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:08.166 [2024-11-26 21:17:55.498590] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:21:08.166 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:08.166 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0
00:21:08.166 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:21:08.166 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:08.166 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 Malloc0
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 [2024-11-26 21:17:55.690706] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19b6370/0x19c22c0) succeed.
00:21:08.424 [2024-11-26 21:17:55.699061] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19b7a00/0x1a03960) succeed.
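Both create_ib_device notices confirm the mlx5 ports the new transport bound to. If that step ever fails on a similar rig, a reasonable first check is whether rdma-core sees the devices at all; a hedged sketch using standard rdma-core tools (not part of this test script):

    ibv_devices              # should list mlx5_0 and mlx5_1
    ibv_devinfo -d mlx5_0    # port state should read PORT_ACTIVE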
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 [2024-11-26 21:17:55.836234] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 ***
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.424 21:17:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3993973
00:21:09.355 [2024-11-26 21:17:56.502606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:21:09.355 qpair failed and we were unable to recover it.
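Strung together, the rpc_cmd calls traced above are the entire failover target: one RAM-backed namespace exported over RDMA on the alternate address. The same bring-up issued directly against the freshly started nvmf_tgt, as a sketch (rpc.py talks to /var/tmp/spdk.sock by default; paths are relative to the SPDK repo root):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420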
00:21:09.355 [2024-11-26 21:17:56.504107] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:09.355 [2024-11-26 21:17:56.504124] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:09.355 [2024-11-26 21:17:56.504130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800
00:21:10.286 [2024-11-26 21:17:57.507800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2
00:21:10.286 qpair failed and we were unable to recover it.
(The reconnect attempt is rejected five more times at roughly one-second intervals, 21:17:57.509188 through 21:18:02.533932, console timestamps 00:21:10.286 through 00:21:15.384; each cycle logs RDMA_CM_EVENT_REJECTED (8), RDMA connect error -74, Failed to connect rqpair=0x2000003cf800, a CQ transport error -6 on qpair id 2, and: qpair failed and we were unable to recover it.)
00:21:16.315 Write completed with error (sct=0, sc=8)
00:21:16.315 starting I/O failed
(All remaining queued Read/Write completions fail the same way, sct=0, sc=8, each followed by: starting I/O failed.)
00:21:16.315 [2024-11-26 21:18:03.538775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:16.315 [2024-11-26 21:18:03.540203] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:16.315 [2024-11-26 21:18:03.540220] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:16.315 [2024-11-26 21:18:03.540225] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:21:17.245 [2024-11-26 21:18:04.544025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3
00:21:17.245 qpair failed and we were unable to recover it.
(One more identical rejection cycle follows, 21:18:04.545390 through 21:18:05.549131, again on qpair id 3 with rqpair=0x2000003d3000, ending: qpair failed and we were unable to recover it.)
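The rejection cycles above are expected at this stage of tc3: the reconnect example launched earlier (host/target_disconnect.sh@55) is still aiming its CONNECTs at the killed target's address, 192.168.100.8, and the log only shows a move to the alternate address after keep-alive gives up ("Resorting to new failover address" below). The invocation, repeated here because the alt_traddr key in the -r transport ID string is what names that failover address:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'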
00:21:19.544 Write completed with error (sct=0, sc=8)
00:21:19.544 starting I/O failed
(All remaining queued Read/Write completions fail the same way, sct=0, sc=8, each followed by: starting I/O failed.)
00:21:19.545 [2024-11-26 21:18:06.553968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:19.545 [2024-11-26 21:18:06.555376] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:19.545 [2024-11-26 21:18:06.555393] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:19.545 [2024-11-26 21:18:06.555400] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cef80
00:21:20.476 [2024-11-26 21:18:07.559194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4
00:21:20.476 qpair failed and we were unable to recover it.
(One more identical rejection cycle follows, 21:18:07.560617 through 21:18:08.564418, on qpair id 4 with rqpair=0x2000002cef80, ending: qpair failed and we were unable to recover it.)
00:21:21.409 [2024-11-26 21:18:08.564519] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed
00:21:21.409 A controller has encountered a failure and is being reset.
00:21:21.409 Resorting to new failover address 192.168.100.9
00:21:21.409 [2024-11-26 21:18:08.566081] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:21:21.409 [2024-11-26 21:18:08.566104] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:21:21.409 [2024-11-26 21:18:08.566112] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40
00:21:22.342 [2024-11-26 21:18:09.569942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1
00:21:22.342 qpair failed and we were unable to recover it.
(A final rejection cycle follows, 21:18:09.571353 through 21:18:10.575323, on qpair id 1 with rqpair=0x2000003d4c40, ending: qpair failed and we were unable to recover it.)
00:21:23.273 [2024-11-26 21:18:10.575427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:21:23.273 [2024-11-26 21:18:10.575530] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0)
00:21:23.273 [2024-11-26 21:18:10.607463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0
00:21:23.273 Controller properly reset.
00:21:23.273 Initializing NVMe Controllers 00:21:23.273 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.273 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:23.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:21:23.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:21:23.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:21:23.273 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:21:23.273 Initialization complete. Launching workers. 00:21:23.273 Starting thread on core 1 00:21:23.273 Starting thread on core 2 00:21:23.273 Starting thread on core 3 00:21:23.273 Starting thread on core 0 00:21:23.273 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:21:23.273 00:21:23.273 real 0m19.336s 00:21:23.273 user 1m5.169s 00:21:23.273 sys 0m4.604s 00:21:23.273 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:23.274 ************************************ 00:21:23.274 END TEST nvmf_target_disconnect_tc3 00:21:23.274 ************************************ 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.274 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:23.274 rmmod nvme_rdma 00:21:23.531 rmmod nvme_fabrics 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3994628 ']' 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3994628 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3994628 ']' 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3994628 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3994628 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3994628' 00:21:23.531 killing process with pid 3994628 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3994628 00:21:23.531 21:18:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3994628 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:23.790 00:21:23.790 real 0m38.866s 00:21:23.790 user 2m37.179s 00:21:23.790 sys 0m11.542s 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:23.790 ************************************ 00:21:23.790 END TEST nvmf_target_disconnect 00:21:23.790 ************************************ 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:21:23.790 00:21:23.790 real 5m10.247s 00:21:23.790 user 12m38.364s 00:21:23.790 sys 1m25.421s 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.790 21:18:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:23.790 ************************************ 00:21:23.790 END TEST nvmf_host 00:21:23.790 ************************************ 00:21:23.790 21:18:11 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:21:23.790 00:21:23.790 real 16m4.359s 00:21:23.790 user 40m25.164s 00:21:23.790 sys 4m33.555s 00:21:23.790 21:18:11 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.790 21:18:11 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:23.790 ************************************ 00:21:23.790 END TEST nvmf_rdma 00:21:23.790 ************************************ 00:21:23.790 21:18:11 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:23.790 21:18:11 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.790 21:18:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.790 21:18:11 -- common/autotest_common.sh@10 -- # set +x 00:21:23.790 ************************************ 00:21:23.790 START TEST spdkcli_nvmf_rdma 00:21:23.790 ************************************ 00:21:23.790 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:21:23.790 * Looking for test storage... 
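The shutdown sequence traced above is the stock killprocess helper at work: confirm the pid is still alive, check what it actually is before signalling it, then kill and reap. Condensed into a standalone sketch (the real helper, with more bookkeeping, lives in test/common/autotest_common.sh):

    # Condensed sketch of the killprocess pattern traced above.
    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1   # is it still running?
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name != sudo ]] || return 1          # the real helper special-cases sudo-wrapped pids
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true          # reap; a killed child exits non-zero
    }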
00:21:23.790 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:21:23.790 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.050 --rc genhtml_branch_coverage=1 00:21:24.050 --rc genhtml_function_coverage=1 00:21:24.050 --rc genhtml_legend=1 00:21:24.050 --rc geninfo_all_blocks=1 00:21:24.050 --rc geninfo_unexecuted_blocks=1 00:21:24.050 00:21:24.050 ' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:24.050 --rc genhtml_branch_coverage=1 00:21:24.050 --rc genhtml_function_coverage=1 00:21:24.050 --rc genhtml_legend=1 00:21:24.050 --rc geninfo_all_blocks=1 00:21:24.050 --rc geninfo_unexecuted_blocks=1 00:21:24.050 00:21:24.050 ' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.050 --rc genhtml_branch_coverage=1 00:21:24.050 --rc genhtml_function_coverage=1 00:21:24.050 --rc genhtml_legend=1 00:21:24.050 --rc geninfo_all_blocks=1 00:21:24.050 --rc geninfo_unexecuted_blocks=1 00:21:24.050 00:21:24.050 ' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.050 --rc genhtml_branch_coverage=1 00:21:24.050 --rc genhtml_function_coverage=1 00:21:24.050 --rc genhtml_legend=1 00:21:24.050 --rc geninfo_all_blocks=1 00:21:24.050 --rc geninfo_unexecuted_blocks=1 00:21:24.050 00:21:24.050 ' 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.050 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00bafac1-9c9c-e711-906e-0017a4403562 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00bafac1-9c9c-e711-906e-0017a4403562 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:24.051 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3998139 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3998139 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3998139 ']' 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.051 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:24.051 [2024-11-26 21:18:11.382244] Starting SPDK v25.01-pre git sha1 fb1630bf7 / DPDK 24.03.0 initialization... 00:21:24.051 [2024-11-26 21:18:11.382293] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3998139 ] 00:21:24.051 [2024-11-26 21:18:11.440108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:24.051 [2024-11-26 21:18:11.478266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.051 [2024-11-26 21:18:11.478273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.309 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
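run_nvmf_tgt above boils down to: start nvmf_tgt pinned to two cores, record its pid, and block in waitforlisten until the JSON-RPC socket answers. A minimal sketch of that startup handshake, assuming a simple poll loop (the real waitforlisten in test/common/autotest_common.sh is more thorough):

    # Minimal sketch: launch the target and poll its RPC socket until it is ready.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK/build/bin/nvmf_tgt" -m 0x3 -p 0 &   # reactors on cores 0-1, main core 0
    tgt_pid=$!

    for _ in {1..100}; do
        # spdk_get_version is a cheap RPC; once it answers, the app is listening.
        "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version &>/dev/null && break
        kill -0 "$tgt_pid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done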
00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.310 21:18:11 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
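The vendor:device tables consumed above (0x15b3:0x1015 is the ConnectX-4 Lx this rig carries) are filled from a sysfs scan. A sketch of building that cache in the same spirit; the real population code lives in scripts/common.sh and caches rather more than this:

    # Sketch: map "vendor:device" IDs to the PCI addresses that carry them.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
        key="$(<"$dev/vendor"):$(<"$dev/device")"                # e.g. 0x15b3:0x1015
        pci_bus_cache[$key]+="${pci_bus_cache[$key]:+ }${dev##*/}"
    done
    # On this test bed the mlx5 entry would expand to both ConnectX ports:
    echo "${pci_bus_cache[0x15b3:0x1015]}"   # 0000:18:00.0 0000:18:00.1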
00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1015)' 00:21:30.870 Found 0000:18:00.0 (0x15b3 - 0x1015) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1015)' 00:21:30.870 Found 0000:18:00.1 (0x15b3 - 0x1015) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:30.870 Found net devices under 0000:18:00.0: mlx_0_0 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:30.870 Found net devices under 0000:18:00.1: mlx_0_1 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:21:30.870 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:30.871 
21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:30.871 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:30.871 link/ether 50:6b:4b:b4:ac:7a brd ff:ff:ff:ff:ff:ff 00:21:30.871 altname enp24s0f0np0 00:21:30.871 altname ens785f0np0 00:21:30.871 inet 192.168.100.8/24 scope global mlx_0_0 00:21:30.871 valid_lft forever preferred_lft forever 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:30.871 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:30.871 link/ether 50:6b:4b:b4:ac:7b brd ff:ff:ff:ff:ff:ff 00:21:30.871 altname enp24s0f1np1 00:21:30.871 altname ens785f1np1 00:21:30.871 inet 192.168.100.9/24 scope global mlx_0_1 00:21:30.871 valid_lft forever preferred_lft forever 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:21:30.871 192.168.100.9' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:21:30.871 192.168.100.9' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:21:30.871 192.168.100.9' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:30.871 21:18:17 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:30.871 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:30.871 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:21:30.871 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:21:30.871 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:21:30.871 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:21:30.871 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:21:30.871 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:30.871 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:30.871 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:21:30.871 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:21:30.872 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:21:30.872 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:21:30.872 ' 00:21:32.772 [2024-11-26 21:18:19.790643] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17d00b0/0x17dd980) succeed. 00:21:32.772 [2024-11-26 21:18:19.799013] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17d1790/0x185d9c0) succeed. 
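The 192.168.100.8/192.168.100.9 pair that the listeners below bind to came out of the interface scan traced a few records back. Condensed, that address plumbing is (a sketch of nvmf/common.sh's get_ip_address plus the first/second-target split):

    # Sketch of the address extraction traced above: "ip -o" prints one record
    # per line, field 4 is "addr/prefix", and cut drops the prefix length.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9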
00:21:33.707 [2024-11-26 21:18:21.084712] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:21:36.236 [2024-11-26 21:18:23.359686] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:21:38.198 [2024-11-26 21:18:25.321994] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:21:39.611 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:21:39.611 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:21:39.611 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:21:39.611 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:21:39.611 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:21:39.612 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:21:39.612 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:21:39.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:39.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:39.612 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:21:39.612 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:21:39.612 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:21:39.612 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:21:39.612 21:18:26 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:40.175 21:18:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:21:40.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:21:40.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:40.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:21:40.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:21:40.175 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:21:40.175 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:21:40.175 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:21:40.175 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:21:40.175 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:21:40.175 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:21:40.175 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:21:40.175 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:21:40.175 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:21:40.175 ' 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:21:45.438 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:21:45.438 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:21:45.438 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:21:45.438 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3998139 ']' 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3998139' 00:21:45.438 killing process with pid 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3998139 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
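Each tuple in these jobs, the create batch earlier and the delete batch just executed, is one spdkcli command plus the substring the harness expects in its output. Run by hand, the first few create steps reduce to plain scripts/spdkcli.py invocations; this is a sketch, with the expected-output checking left to the batch driver test/spdkcli/spdkcli_job.py:

    # Hand-run equivalent of a few create steps from the job above.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    CLI="$SPDK/scripts/spdkcli.py"

    "$CLI" /bdevs/malloc create 32 512 Malloc1
    "$CLI" /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW \
           max_namespaces=4 allow_any_host=True
    "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    "$CLI" /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses \
           create rdma 192.168.100.8 4260 IPv4
    "$CLI" ll /nvmf   # then diff the tree against the saved .match expectations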
00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:45.438 rmmod nvme_rdma 00:21:45.438 rmmod nvme_fabrics 00:21:45.438 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:21:45.696 00:21:45.696 real 0m21.737s 00:21:45.696 user 0m46.313s 00:21:45.696 sys 0m5.029s 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:45.696 21:18:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:45.696 ************************************ 00:21:45.696 END TEST spdkcli_nvmf_rdma 00:21:45.696 ************************************ 00:21:45.696 21:18:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:21:45.696 21:18:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:45.696 21:18:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:45.696 21:18:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:45.696 21:18:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:21:45.696 21:18:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:21:45.696 21:18:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:21:45.696 21:18:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.696 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:21:45.696 21:18:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:21:45.696 21:18:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:21:45.696 21:18:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:21:45.696 21:18:32 -- common/autotest_common.sh@10 -- # set +x 00:21:50.959 INFO: APP EXITING 00:21:50.959 INFO: killing all VMs 00:21:50.959 INFO: killing vhost app 00:21:50.959 WARN: no vhost pid file found 00:21:50.959 INFO: EXIT DONE 00:21:52.856 Waiting for block devices as requested 00:21:52.856 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:52.856 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:53.113 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:53.113 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:53.113 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:53.113 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:53.371 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:53.371 
0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:53.371 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:53.371 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:53.630 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:53.630 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:53.630 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:53.888 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:53.888 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:53.888 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:53.888 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:21:58.085 Cleaning 00:21:58.085 Removing: /var/run/dpdk/spdk0/config 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:21:58.085 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:58.085 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:58.085 Removing: /var/run/dpdk/spdk1/config 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:21:58.085 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:58.085 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:58.085 Removing: /var/run/dpdk/spdk1/mp_socket 00:21:58.085 Removing: /var/run/dpdk/spdk2/config 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:21:58.085 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:58.085 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:58.086 Removing: /var/run/dpdk/spdk3/config 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:21:58.086 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:58.086 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:58.086 Removing: /var/run/dpdk/spdk4/config 00:21:58.086 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:21:58.086 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:58.086 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:58.086 Removing: /dev/shm/bdevperf_trace.pid3744689 00:21:58.086 Removing: /dev/shm/bdev_svc_trace.1 00:21:58.086 Removing: /dev/shm/nvmf_trace.0 00:21:58.086 Removing: /dev/shm/spdk_tgt_trace.pid3699849 00:21:58.086 Removing: /var/run/dpdk/spdk0 00:21:58.086 Removing: /var/run/dpdk/spdk1 00:21:58.086 Removing: /var/run/dpdk/spdk2 00:21:58.086 Removing: /var/run/dpdk/spdk3 00:21:58.086 Removing: /var/run/dpdk/spdk4 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3696547 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3698125 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3699849 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3700303 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3701379 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3701643 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3702748 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3702757 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3703141 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3708326 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3710237 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3710605 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3710795 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3711017 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3711341 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3711623 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3711904 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3712223 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3712812 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3716041 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3716243 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3716522 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3716540 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3717090 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3717103 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3717648 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3717658 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3717948 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3718056 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3718241 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3718385 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3718883 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3719163 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3719499 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3723466 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3727541 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3739050 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3739856 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3744689 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3744969 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3748839 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3754775 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3757615 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3767555 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3792286 00:21:58.086 Removing: /var/run/dpdk/spdk_pid3795896 00:21:58.086 Removing: 
00:21:58.344 Clean
00:21:58.344 21:18:45 -- common/autotest_common.sh@1453 -- # return 0
00:21:58.344 21:18:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:21:58.344 21:18:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:58.344 21:18:45 -- common/autotest_common.sh@10 -- # set +x
00:21:58.344 21:18:45 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:21:58.344 21:18:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:21:58.344 21:18:45 -- common/autotest_common.sh@10 -- # set +x
00:21:58.344 21:18:45 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:21:58.344 21:18:45 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:21:58.344 21:18:45 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:21:58.344 21:18:45 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:21:58.344 21:18:45 -- spdk/autotest.sh@398 -- # hostname
00:21:58.344 21:18:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-37 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:21:58.601 geninfo: WARNING: invalid characters removed from testname!
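autotest.sh@398 is the coverage capture step: lcov drives geninfo over the objects in the spdk tree and writes the test-time tracefile cov_test.info, tagged with the host name (the matching baseline, cov_base.info, was captured earlier in the run). The geninfo warning is harmless; it most likely refers to the hyphens in the test name spdk-wfp-37, which lcov strips. With the --rc knobs removed, the call reduces to roughly:

# Sketch of the @398 capture call above, minus the --rc coverage-option noise.
lcov -q -c --no-external \
     -d /var/jenkins/workspace/nvmf-phy-autotest/spdk \
     -t spdk-wfp-37 \
     -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info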
00:22:16.708 21:19:03 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:19.247 21:19:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:20.626 21:19:07 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:22.000 21:19:09 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:23.899 21:19:11 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:22:25.274 21:19:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
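autotest.sh@399 through @407 merge the baseline and test tracefiles and then prune code that should not count toward SPDK coverage: bundled DPDK, system headers under /usr, and example/tool sources. The same pipeline without the --rc options, with the OUT variable introduced here purely for brevity:

OUT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info   # merge base + test
lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info             # drop bundled DPDK
lcov -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
lcov -q -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
lcov -q -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
lcov -q -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info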
00:22:27.176 21:19:14 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:27.176 21:19:14 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:27.176 21:19:14 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]]
00:22:27.176 21:19:14 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:27.176 21:19:14 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:27.176 21:19:14 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:22:27.176 + [[ -n 3617653 ]]
00:22:27.176 + sudo kill 3617653
00:22:27.186 [Pipeline] }
00:22:27.205 [Pipeline] // stage
00:22:27.213 [Pipeline] }
00:22:27.229 [Pipeline] // timeout
00:22:27.234 [Pipeline] }
00:22:27.252 [Pipeline] // catchError
00:22:27.257 [Pipeline] }
00:22:27.272 [Pipeline] // wrap
00:22:27.278 [Pipeline] }
00:22:27.292 [Pipeline] // catchError
00:22:27.301 [Pipeline] stage
00:22:27.303 [Pipeline] { (Epilogue)
00:22:27.316 [Pipeline] catchError
00:22:27.318 [Pipeline] {
00:22:27.330 [Pipeline] echo
00:22:27.332 Cleanup processes
00:22:27.339 [Pipeline] sh
00:22:27.620 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:27.620 4013117 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:27.634 [Pipeline] sh
00:22:27.915 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:22:27.915 ++ grep -v 'sudo pgrep'
00:22:27.915 ++ awk '{print $1}'
00:22:27.915 + sudo kill -9
00:22:27.915 + true
00:22:27.927 [Pipeline] sh
00:22:28.207 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:36.517 [Pipeline] sh
00:22:36.797 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:36.797 Artifacts sizes are good
00:22:36.812 [Pipeline] archiveArtifacts
00:22:36.819 Archiving artifacts
00:22:36.951 [Pipeline] sh
00:22:37.235 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:22:37.247 [Pipeline] cleanWs
00:22:37.255 [WS-CLEANUP] Deleting project workspace...
00:22:37.256 [WS-CLEANUP] Deferred wipeout is used...
00:22:37.261 [WS-CLEANUP] done
00:22:37.263 [Pipeline] }
00:22:37.279 [Pipeline] // catchError
00:22:37.289 [Pipeline] sh
00:22:37.567 + logger -p user.info -t JENKINS-CI
00:22:37.575 [Pipeline] }
00:22:37.588 [Pipeline] // stage
00:22:37.593 [Pipeline] }
00:22:37.605 [Pipeline] // node
00:22:37.609 [Pipeline] End of Pipeline
00:22:37.641 Finished: SUCCESS